# interesting science stories



## ae1905

NASA's Plan To Use A Giant Magnet To Make Mars Habitable


----------



## ae1905

*Scientists turn mammalian cells into complex biocomputers*

sciencemag.org 

By Robert F. Service, Mar. 27, 2017, 11:00 AM








Adding genetic circuits to cells lets researchers control their actions, setting the stage for new ways to treat cancer and other diseases.
ktsimage/iStockphoto 

Computer hardware is getting a softer side. A research team has come up with a way of genetically engineering the DNA of mammalian cells to carry out complex computations, in effect turning the cells into biocomputers. The group hasn’t put those modified cells to work in useful ways yet, but down the road researchers hope the new programming techniques will help improve everything from cancer therapy to on-demand tissues that can replace worn-out body parts.

Engineering cells to function like minicomputers isn’t new. As part of the growing field of synthetic biology, research teams around the globe have been manipulating DNA for years to make cells perform simple actions like lighting up when oxygen levels drop. To date, most such experiments have been done in _Escherichia coli_ and other bacteria, because their genes are relatively easy to manipulate. Researchers have also managed to link multiple genetic circuits together within a single cell to carry out more complex calculations in bacteria.

Scientists have tried to extend this to mammalian cells to create genetic circuitry that can help detect and treat human diseases. But efforts to construct large-scale genetic circuits in mammalian cells have largely failed: For complex circuits to work, the individual components—the turning on and off of different genes—must happen consistently. The most common way to turn a gene on or off is by using proteins called transcription factors that bind to and regulate the expression of a specific gene. The problem is these transcription factors “all behave slightly differently,” says Wilson Wong, a synthetic biologist at Boston University.

To upgrade their DNA “switches,” Wong and his colleagues steered clear of transcription factors and instead switched human kidney cell genes on and off using scissorlike enzymes that selectively cut out snippets of DNA. These enzymes, known as DNA recombinases, recognize two target stretches of DNA, each 30 to 50 or more base pairs long. When a recombinase finds its target DNA stretches, it cuts out any DNA in between and stitches the severed ends of the double helix back together.

To design genetic circuits, Wong and his colleagues use the conventional cellular machinery that reads out a cell’s DNA, transcribes its genes into RNA, and then translates the RNA into proteins. This normal gene-to-protein operation is initiated by another DNA snippet, a promoter, that sits just upstream of a gene. When a promoter is activated, a molecule called RNA polymerase gets to work, marching down the DNA strand and producing an RNA until it reaches another DNA snippet—a termination sequence—that tells it to stop.

To make one of their simplest circuits, Wong’s team inserted four extra snippets of DNA after a promoter. The main one produced green fluorescent protein (GFP), which lights up cells when it is produced. But in front of it was a termination sequence, flanked by two snippets that signaled the DNA recombinase. Wong and his team then inserted another gene in the same cell that made a modified recombinase, activated only when bound to a specific drug; without the drug, the recombinase wouldn’t cut the DNA.

When the promoter upstream of the _GFP_ gene was activated, the RNA polymerase ran headfirst into the termination sequence, stopped reading the DNA, and didn’t produce the fluorescent protein. But when the drug was added, the recombinase switched on and spliced out the termination sequence that was preventing the RNA polymerase from initiating production of GFP. Voila, the cell lit up. 
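Taken together, this drug-gated switch behaves like a two-input AND gate: fluorescence requires both an active promoter and the drug. A minimal sketch of that logic (function and variable names here are illustrative, not from the paper):

```python
def gfp_output(promoter_on: bool, drug_present: bool) -> bool:
    """Return True if the cell fluoresces under the circuit described above."""
    recombinase_active = drug_present            # drug-bound recombinase can cut DNA
    terminator_present = not recombinase_active  # excision removes the blocking terminator
    # RNA polymerase only reaches the GFP gene if the promoter fires
    # and the terminator has been spliced out.
    return promoter_on and not terminator_present

# Truth table: GFP lights up only when both promoter and drug are present.
for promoter in (False, True):
    for drug in (False, True):
        print(promoter, drug, gfp_output(promoter, drug))
```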

As if that Rube Goldbergian feat weren’t enough, Wong and his colleagues also showed that by adding additional recombinases together with different target strands, they could build a wide variety of circuits, each designed to carry out a different logical operation. The approach worked so well that the team built 113 different circuits, with a 96.5% success rate, they report today in _Nature Biotechnology_. As a further demonstration, they engineered human cells to produce a biological version of something called a Boolean logic lookup table. The circuit in this case has six different inputs, which can combine in different ways to execute one of 16 different logical operations.
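One way to picture the lookup table: it is the biological analogue of a two-input digital LUT, where four configuration inputs select among the 2^4 = 16 possible logic operations on two data inputs, six inputs in all. A minimal sketch (how these six inputs map onto the paper’s genetic parts is an assumption, not from the article):

```python
def lut2(select: tuple, a: bool, b: bool) -> bool:
    """A 2-input lookup table: `select` holds the 4 truth-table outputs
    for (a, b) = 00, 01, 10, 11; a and b are the two data inputs."""
    index = 2 * int(a) + int(b)
    return bool(select[index])

# Any of the 16 two-input Boolean operations is just a choice of `select`.
AND = (0, 0, 0, 1)
XOR = (0, 1, 1, 0)
print(lut2(AND, True, True))   # AND of True, True
print(lut2(XOR, True, False))  # XOR of True, False
```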

“It’s exciting in that it represents another scale at which we can design mammalian genetic circuits,” says Timothy Lu, a synthetic biologist at the Massachusetts Institute of Technology in Cambridge. Although the current circuits are a proof of concept, both Lu and Wong say synthetic biologists want to use them to create new medical therapies. For example, scientists could engineer T cells, sentinels of the immune system, with genetic circuits that initiate a response to wipe out tumors when they detect the presence of two or three “biomarkers” produced by cancer cells, Lu says. Another example being explored by Wong and others is to engineer stem cells so they develop into specific cell types when prompted by different signals. This could let synthetic biologists generate tissues on demand, such as insulin-producing β cells, or cartilage-producing chondrocytes.


----------



## ae1905

Your Cat Thinks You're Cool

https://www.scientificamerican.com/podcast/episode/your-cat-thinks-youre-cool/


----------



## ae1905

*Red Planet versus Dead Planet: Scientists Debate Next Destination for Astronauts in Space*

scientificamerican.com 

Leonard David

THE WOODLANDS, Texas—Should the U.S. send humans back to the moon in a 21st-century reboot of the cold war–era Apollo program…or should the nation go full-throttle and for the gusto, sending crews all the way to Mars, where none have gone before? U.S. scientists and policy makers have grappled ad nauseam with America’s next great otherworldly destination for decades, without making much meaningful progress. Now that it is approaching a half-century since an American—or anyone at all, for that matter—last left low Earth orbit, the debate seems lost in space.

Soon that shall change, many advocates of human spaceflight believe, through a hybrid of new initiatives by Pres. Donald Trump’s administration as well as commercial efforts led by private industry. The Trump White House’s vision for U.S. astronauts remains at present a foggy TBD, but there are plans afoot to relaunch a National Space Council. Helmed by Vice Pres. Mike Pence, the council would set a new space agenda not only for NASA but also for U.S. rocket companies, big and small, such as SpaceX, Blue Origin, Boeing, Lockheed Martin and Orbital ATK.

In the meantime, speculation about the U.S.'s future in space has reached its highest point in recent memory, as made clear here last week by the proceedings of the 48th Lunar and Planetary Science Conference (LPSC). At the meeting, scientists unleashed the latest findings regarding Earth’s moon, Mars, asteroids, comets and myriad other cosmic objects of interest, often with a hopeful eye toward rekindling human voyages to other worlds. Although robotic probes are the persistent currency of discovery in today’s planetary science, many researchers increasingly see astronauts as crucial agents of exploration in the not-too-distant future.

*Destination Moon*

“Planetary science will completely change once we get crew beyond low Earth orbit,” says David Kring, a senior staff scientist at the Lunar and Planetary Institute. “The best way to explore the moon is by the well-trained astronaut, hands down. Apollo demonstrated that wonderfully.”

Kring says he is eager to see the first NASA exploration missions using the agency’s Space Launch System (SLS) rocket, which is currently being developed along with a crewed Orion spacecraft. At the Trump administration’s insistence, NASA is assessing the prospect of flying a two-person crew around the moon in mid-2019—years ahead of schedule for the delay-plagued SLS and Orion programs. “I’m even more anxious to see crews deploy robotic assets to the lunar surface and eventually land there themselves,” Kring adds. “We need to get back on the surface. We need to collect samples. And we need to bring them back to Earth.”

*A Scientific Bonanza*

The moon is a bonanza for scientists, Kring says, because it offers crucial insights for understanding the origins and evolution of Earth and other planets: how they formed from the accretion and differentiation of smaller bodies; how they were bombarded by impacts early in their histories; and even how some of them migrated in their orbits around the sun. “The best place to answer those questions is on the moon,” he explains, given that its airless surface contains the scarcely altered imprints of 4.5 billion years of solar system history.







You can’t be a Martian without being a lunatic, suggests Clive Neal, a lunar scientist at the University of Notre Dame. Credit: Barbara David

Here on Earth destructive geologic processes cloud our view of those long-gone formative eons, Kring says. Even on modern-day Mars, a planet far more inert than Earth, many of the answers we might seek to our solar system’s deepest mysteries have been erased by the slow workings of geology.

Kring also sees the moon as a gateway to Mars. “We have to have legitimate, meaningful milestones on our way to Mars,” he explains. “We all want to get humans on Mars. The question is how do you get there? I don’t think we’re going to develop the right workforce with the capabilities to magically get to Mars by 2035 or 2045. We need to develop the techniques and the workforce for that leap, and that can happen in [lunar orbit] and on the moon.”

*Every Martian Is a Lunatic*

According to Clive Neal, a lunar scientist at the University of Notre Dame, any moon-versus-Mars argument is a nonstarter. “It’s not either-or,” he says, because tapping lunar resources can support a sustainable human expansion deeper into the solar system.

“You can’t be a Martian without being a lunatic,” Neal says. “If you want to do ‘flags and footprints,’ go to Mars now. But you’ll never go back, because that’s Apollo—a fantastic program, but it was not sustainable.”

To Neal, Earth’s satellite is first and foremost a world rich in resources that can and should be used. For example, he pointed to sun-shy craters at the lunar poles, where near-constant darkness has trapped and preserved water ice ripe for conversion into oxygen, water and rocket propellant. “We have to do some basic geologic prospecting,” he says. And if the moon’s resources are shown to be substantial, “you then bring the moon into our economic sphere of influence. I view the moon as enabling, and that comes through its resources.”

*Apollo Dreams*

Speaking at a breakout session prior to the formal start of the LPSC gathering last week, _Apollo 17_ moon walker and geologist Jack Schmitt reflected on the value of human exploration of the moon. It had been nearly 45 years since Schmitt bunny-hopped his way across the low-gravity lunar landscape in December 1972 during the final Apollo mission; half of Apollo’s 12 moon walkers have now died. With the passing of his _Apollo 17_ crewmate, Gene Cernan, earlier this year, Schmitt spoke as the last living person from that mission to have set foot on the moon.

Schmitt’s speech raised issues familiar to many in the audience. For decades, he has championed the potential economics of lunar mining for helium 3, an isotope that could be crucial for certain forms of nuclear fusion. The lunar surface has soaked up vast quantities of helium 3 from billions of years of bombardment by the solar wind, Schmitt explained, and drawing on that resource is how a lunar settlement could support itself. Provided, that is, that scientists back on Earth can first figure out how to make nuclear fusion an economically viable power source—a goal that has eluded them for decades.

Schmitt’s faith in a lunar future for humankind is unwavering. “A settlement on the Moon based on helium 3 export to Earth for fusion power makes a lot of sense to me. It starts not only to make us a two-planet species but enables, I think, Mars exploration in many different ways,” he noted.







_Apollo 17_ moon walker and geologist Jack Schmitt champions the possible economics of mining helium 3 on the moon. Credit: Barbara David

For example, he said, helium 3 mining would produce by-products including water, hydrogen, carbon and nitrogen. These useful substances exist in only the most minuscule traces in lunar soil—but such an enormous amount of surface material would have to be processed to harvest helium 3 that they would accumulate in significant amounts. Water sourced from the low-gravity moon, Schmitt explained, could be utilized as a protective, radiation-thwarting cocoon, built into the superstructures of Mars-bound crewed spacecraft. “A few inches of water around a spacecraft weighs an awful lot, and it is expensive bringing it from Earth. You can produce water anywhere on the moon,” he said.

*Red Planet Runs*

Others have little time for the moon and the decades that would be required to develop infrastructure there. Their eyes are instead on the bigger prize: Mars. Elon Musk, SpaceX’s CEO and chief rocketeer, is the foremost example of the “Mars first” contingent. And according to SpaceX engineer Paul Wooster, Red Planet planning by Musk’s private company is steadily progressing. “The vision for SpaceX, long-term, is making it possible for large numbers of people to go to Mars,” he says.

SpaceX plans to build a mega-rocket and a giant interplanetary crew transporter to populate Martian outposts and eventually a full-size city, Wooster reports. But before the company can achieve those wild goals it must first firm up its capability to send something—anything at all—to Mars. That would come via interplanetary flights of the firm’s Red Dragon spacecraft—a derivative of the SpaceX Dragon capsule that has already hauled cargo to the International Space Station and in due course will take astronauts there. Wooster says SpaceX is intent on rapidly building up surface infrastructure on Mars, hopefully beginning by the mid-2020s. “We obviously have a lot of work ahead of us,” he says.

A crucial component of the SpaceX plan for Mars has been demonstrated several times here on Earth. The company has repeatedly landed its Falcon 9 rocket’s first stage at sea on a drone ship, and on land at Florida’s Cape Canaveral Air Force Station. Because the first stage can then be repeatedly reused rather than flown once and discarded, an economy of scale could develop that greatly reduces the cost of access to space, and thus the price tag of a bank-busting plan to colonize Mars. For SpaceX’s ambitious plans to work, the company will have to develop and demonstrate reusability on its next generation of rockets poised to debut after Falcon 9.

Wooster says an unpiloted SpaceX Red Dragon flight to Mars, able to deliver roughly one ton of useful payload, is being considered for 2020. Other Red Dragons could follow every two years or so, when Mars and Earth are in favorable alignments that minimize the fuel needed for an interplanetary crossing.
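That “every two years or so” cadence is the Earth-Mars synodic period, the time between successive favorable alignments, which can be checked from the two planets’ (rounded) orbital periods:

```python
# Back-of-the-envelope check of the launch-window cadence mentioned above.
earth_period = 1.000  # Earth's orbital period, years
mars_period = 1.881   # Mars's orbital period, years

# The synodic period is the reciprocal of the difference in angular rates.
synodic = 1 / (1 / earth_period - 1 / mars_period)
print(f"Launch windows recur roughly every {synodic:.2f} years")  # ~2.14 years
```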







SpaceX Red Dragon nears autopilot touchdown on Mars. The private firm has the Red Planet in its sights to establish an outpost, and eventually a city, on that distant world. 

Credit: SpaceX

*SpaceX Marks the Spot*

“First and foremost is to learn how to land large payloads on Mars,” Wooster says. In preparation for planting an outpost on that far-off world, experiments onboard Red Dragon are set to test on-the-spot propellant production. That can be done, he says, by processing water from Mars’ surface and with gases extracted from the carbon dioxide–rich atmosphere. In fact, NASA is also set to try out something similar—a Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) on the space agency’s Mars 2020 rover.

SpaceX has been quietly working with NASA and non-NASA landing site specialists to plot locales for plopping down its spacecraft. Site selection is driven by the quantity of water the firm is looking for—thousands of tons, Wooster explains. One such spot “is looking quite promising,” he says: Arcadia Planitia, a smooth plain on Mars that appears to have large quantities of ice near the surface.

Of course, where there is ice there may well be subsurface pockets of liquid water—and potentially life, raising the possibility that SpaceX could violate “planetary protection” protocols by landing in such regions. Wooster says SpaceX is working with NASA’s Office of Planetary Protection to properly address such concerns.

For now, he reiterates that the company is most definitely open for business and eager to entice researchers to make use of Red Dragon for toting their experiments. “SpaceX is a transportation company,” Wooster explains. “We’re very happy to deliver payloads to Mars for various people,” he adds, an offer that has also piqued NASA’s interest in contracting with the company to launch a potential science experiment in 2020. “We want to turn this into a steady cadence where we are sending [Red] Dragons to Mars based on every opportunity as we go forward, and eventually shifting over to our large Mars vehicle to deliver very large payloads,” he concludes.

Be it back to the moon or first footfalls on Mars, the trajectory taken by the U.S. appears to have the research community in ready-and-waiting mode. Whether it comes via government-backed public-private partnerships, international collaboration or go-it-alone endeavors by the only nation to have landed astronauts on the moon, there is plenty of extraterrestrial science that can—and will—be done, potentially even by humans.


----------



## ae1905

https://www.scientificamerican.com/video/the-10-weirdest-things-in-the-solar-system/


----------



## ae1905

*Surprising study finds that cats actually prefer people over food - Seriously, Science?*

blogs.discovermagazine.com 

_If you’ve ever had a cat, you probably believe that, given the choice, your cat would always choose food over you. But assumptions are not always correct, which is why we test them with science! Here, scientists tested whether pet and shelter cats prefer social interaction, food, scent, or toys. They found that “although there was clear individual variability in cat preference, social interaction with humans was the most-preferred stimulus category for the majority of cats, followed by food.” Now, doesn’t that make you feel special?_

*Social interaction, food, scent or toys? A formal assessment of domestic pet and shelter cat (Felis silvestris catus) preferences*

“Domestic cats (Felis silvestris catus) engage in a variety of relationships with humans and can be conditioned to engage in numerous behaviors using Pavlovian and operant methods. Increasingly cat cognition research is providing evidence of their complex socio-cognitive and problem solving abilities. Nonetheless, it is still common belief that cats are not especially sociable or trainable. This disconnect may be due, in part, to a lack of knowledge of what stimuli cats prefer, and thus may be most motivated to work for. The current study investigated domestic cat preferences at the individual and population level using a free operant preference assessment. Adult cats from two populations (pet and shelter) were presented with three stimuli within each of the following four categories: human social interaction, food, toy, and scent. Proportion of time interacting with each stimulus was recorded. The single most-preferred stimulus from each of the four categories were simultaneously presented in a final session to determine each cat’s most-preferred stimulus overall. Although there was clear individual variability in cat preference, social interaction with humans was the most-preferred stimulus category for the majority of cats, followed by food. This was true for cats in both the pet and shelter population. Future research can examine the use of preferred stimuli as enrichment in applied settings and assess individual cats’ motivation to work for their most-preferred stimulus as a measure of reinforcer efficacy.”


----------



## ae1905

*Climate Change Makes Farmers Chase New Planting Windows*

blogs.discovermagazine.com 

A farmer climbs into a combine. _(Credit: USDA/Lance Cheung)_

Most people think of frost as a farmer’s worst nightmare. But for corn growers in Illinois, there’s little worse than a warm, soggy spring. Rainfall can soak soft prairie soils and rot the kernels before they can grow. If the rains keep farmers from their fields long enough, crop yields start to plummet. Rain can also wash away herbicides, pushing growers to apply more.

For years, this fear has driven farmers to plant earlier and earlier. Late April used to be the prime planting window. This year, weather permitting, many will begin planting this week.

Emerson Nafziger, a University of Illinois extension specialist, says each year he hears stories of people planting earlier than the last. Some of those are just tales for the coffee shop, he says; this year he heard rumors of people planting in February. But he’s seen the trend himself over recent decades, though he points out that seed treatments and high-tech farm equipment are as responsible for jumping the gun as the weather.

“Forty years ago a farmer with good conditions the first week of April almost certainly would not have planted,” he says. “It was seen as too risky. Today that’s not the case.”

These trends, along with a string of wet springs late in the last decade, prompted U.S. Department of Agriculture scientist Adam Smith to investigate how planting windows might shift even more with climate change in the years to come.

He and his colleagues used the latest climate models to see what might happen in Illinois down the road. They found spring continues to get warmer and wetter. But summers also get hotter and drier. Both of those are bad for crop yield. If the plant overheats while it’s maturing, it makes less corn. It can also freeze in the ground.

Their models show two planting seasons emerge in the future. One happens in March, as warmer winters let farmers plant earlier and earlier. The other comes between May and June, after the soggiest weather but before the heat.

“The season fragments and we start to see an early-early season, so that March starts looking like a good target for planting in the future,” he says. “In the past, March has been the bleeding edge. Nobody in their right mind would have planted then. But we’ve already seen the trend for early planting.”

Timeliness has always been vital in farming, but soon many Midwest growers will have to decide between these two contrasting strategies. Do they plant early and risk the cold, or do they plant late and risk the heat?

“There’s a clock ticking as soon as it begins to warm up in the spring and the field is plantable,” Smith says.


----------






## ae1905

*Simulation Suggests 68 Percent of the Universe May Not Actually Exist*

science.slashdot.org 

Posted by Slashdot

boley1 quotes a report from New Atlas: 
_According to the Lambda Cold Dark Matter (Lambda-CDM) model, which is the current accepted standard for how the universe began and evolved, the ordinary matter we encounter every day only makes up around five percent of the universe's density, with dark matter comprising 27 percent, and the remaining 68 percent made up of dark energy, a so-far theoretical force driving the expansion of the universe. A new study has questioned whether dark energy exists at all, citing computer simulations that found that by accounting for the changing structure of the cosmos, the gap in the theory, which dark energy was proposed to fill, vanishes. According to the new study from Eotvos Lorand University in Hungary and the University of Hawaii, the discrepancy that dark energy was "invented" to fill might have arisen from the parts of the theory that were glossed over for the sake of simplicity. The researchers set up a computer simulation of how the universe formed, based on its large-scale structure. That structure apparently takes the form of "foam," where galaxies are found on the thin walls of each bubble, but large pockets in the middle are mostly devoid of both normal and dark matter. The team simulated how gravity would affect matter in this structure and found that, rather than the universe expanding in a smooth, uniform manner, different parts of it would expand at different rates. Importantly, though, the overall average rate of expansion is still consistent with observations, and points to accelerated expansion. The end result is what the team calls the Avera model. If the research stands up to scrutiny, it could change the direction of the study of physics away from chasing the ghost of dark energy._

"The theory of general relativity is fundamental in understanding the way the universe evolves," says Dr. Laszlo Dobos, co-author of the new paper. "We do not question its validity; we question the validity of the approximate solutions. Our findings rely on a mathematical conjecture which permits the differential expansion of space, consistent with general relativity, and they show how the formation of complex structures of matter affects the expansion. These issues were previously swept under the rug but taking them into account can explain the acceleration without the need for dark energy." The study has been published in _Monthly Notices of the Royal Astronomical Society_.


----------



## ae1905

*Octopuses Edit Their Genetic Code Like No Other Animal - D-brief*

blogs.discovermagazine.com 

_(Credit: Wikimedia Commons)_

New research into the cephalopod genome is undermining our assumptions about evolution, and the role that DNA mutations play in updating a species’ physiology.

Researchers from the Marine Biological Laboratory in Woods Hole and Tel Aviv University have been studying how cephalopods — squids, octopuses, cuttlefish and nautiluses — edit their genome, and found that instead of relying on DNA mutations to adapt, they have the ability to make changes to their RNA, the genetic “messengers” that carry out the instructions written by DNA. This means that their fundamental genetic code remains largely the same from generation to generation, while changes occur at the level of the individual and don’t carry over to their offspring.

*Don’t Alter the Messenger*

In humans, less than one percent of our RNA transcripts show signs of editing, and the same holds true across most other species. In our cells, DNA instructions get copied faithfully to RNA molecules, which then carry out their missions as instructed. Changes, if they do occur, happen at the level of the species and take generations. Cephalopods, however, have figured out how to tinker with the process of transcribing DNA to RNA, editing their genetic messages to create changes on an individual level.

Looking at a previously published octopus genome to search for signs of editing, researchers report that the level of RNA editing is about an order of magnitude higher than in primates. This means that octopuses alter the messages written by their DNA, transforming the original code into custom commands. The result is the production of novel proteins and enzymes that could potentially grant them new abilities.

Back in 2015, some of the same researchers discovered that octopuses edit their RNA more often than other species. Now, they’ve gone a step further by searching through a whole octopus genome to find where and when these edits happen and how this could affect their evolutionary history. They published their findings Thursday in _Cell_.

Many of the RNA edits occur in cephalopod brains, say the researchers, such as one adaptation that allows their neurons to function in cold environments. Octopuses are infamously smart creatures, able to open jar lids and even escape their aquariums, and the researchers say that the ability to make changes to their RNA could play a role in their intelligence. Though no definitive evidence exists, the researchers say that the effects of such RNA editing are likely “profound and complex.”

Further shoring up their claim is the discovery that nautiluses, which don’t share octopuses’ smarts, don’t rely as heavily on RNA editing. If the researchers’ theory is correct, being able to alter RNA could be an important factor in the species’ intelligence. They still don’t, however, know what causes some bits of RNA to change after transcription while others stay the same. It’s likely not anything conscious on the part of the cephalopods, and could simply be the hand of natural selection favoring beneficial alterations to RNA.

*Evolutionary Trade-off*

What cephalopods have done, essentially, is to trade long-term, DNA-driven evolution for more immediate and individual adaptability. The researchers found that their DNA showed much lower rates of mutation than in most creatures, something they say is necessary for this type of RNA editing.

The parts of their genome that code for RNA editing are large, making up anywhere from 23 to 41 percent of protein coding sequences, depending on the species. If any of these areas get altered, they won’t be able to change their RNA anymore. So, they’ve favored immutability in this part of the genome, vastly slowing down their rate of evolution. The upside, however, is that individual cephalopod bodies can undergo relatively sweeping changes.

The new insights have also pushed back the timeline of cephalopod evolution. Most estimates of when a species first appeared are based on “molecular clock” analyses, which take a known rate of genetic mutation and extrapolate backward to a likely origin date. If squids and octopuses were experiencing mutations at a much lower rate, it would greatly extend their plausible history.
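The extrapolation behind a molecular clock is simple arithmetic. A sketch with made-up numbers (purely for illustration, not cephalopod data) shows why a lower assumed mutation rate stretches the inferred timeline:

```python
def divergence_time(diff_per_site: float, rate_per_site_per_year: float) -> float:
    """Years since two lineages split, assuming a constant ("clock-like") rate.

    Mutations accumulate independently along both branches, hence the factor of 2.
    """
    return diff_per_site / (2 * rate_per_site_per_year)

# Halving the assumed mutation rate doubles the inferred age, which is why a
# lower cephalopod mutation rate would push their origins further back in time.
typical = divergence_time(0.02, 1.0e-9)  # ~10 million years
slowed = divergence_time(0.02, 0.5e-9)   # ~20 million years
print(f"typical-rate estimate: {typical:.3g} yr, slowed-rate estimate: {slowed:.3g} yr")
```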


----------



## ae1905

*Are some wolves being ‘redomesticated’ into dogs?*

sciencemag.org

By Virginia Morell, Apr. 5, 2017, 12:00 PM








Will trash-eating wolves turn into a new kind of dog?
KenCanning/iStockphoto 

It happened thousands of years ago, and it may be happening again: Wolves in various parts of the world may have started on the path to becoming dogs. That’s the conclusion of a new study, which finds that the animals are increasingly dining on livestock and human garbage instead of their wild prey, inching closer and closer to the human world in some places. But given today’s industrialized societies, this closeness might also bring humans and wolves into more conflict, with disastrous consequences for both.

“It’s a thought-provoking study, and does a good job of laying out how diet has the potential to change a large predator,” says Lee Dugatkin, an evolutionary biologist at the University of Louisville in Kentucky, who wasn’t involved in the research.

To find out how gray wolves might be affected by eating more people food, Thomas Newsome, an evolutionary biologist at the Deakin University in Melbourne, Australia, and his colleagues examined studies of what’s happened to other large carnivores that live close to people. Asiatic lions in the Gir protected area of western India, for instance, primarily kill and eat livestock, and have grown so much less aggressive toward humans that tourists can visit them on foot. In Israel, red foxes live longer and use smaller home ranges when they rely on a diet of leftovers. In contrast, black bears in North America that dine on human garbage are more likely to die young—because people kill them.

Newsome’s 2014 study of a dingo population in Australia’s Tanami Desert showed that the wild dogs’ habit of dining almost exclusively on junk food at a waste management facility had made them fat and less aggressive. They were also more likely to mate with local dogs and had become “cheeky,” says Newsome, daring to run between his legs as he set out traps for them. Most intriguingly, the dumpster dingoes’ population formed a genetic cluster distinct from all other dingoes—indicating that they were becoming genetically isolated, a key step in forming a new species.

Is this happening to gray wolves? The conditions are ripe for it, says Newsome, noting that human foods already make up 32% of gray wolf diets around the world. The animals now mostly range across remote regions of Eurasia and North America, yet some are returning to developed areas. Wolves in Greece primarily consume pigs, goats, and sheep; those in Spain feed mainly on ponies and other livestock; and Iranian wolves rarely eat anything other than chickens, domestic goats, and garbage. “Based on what’s happened to these other carnivores [that eat human foods], we think these wolves will change,” Newsome says.

The wolves’ new diet could affect everything from the size of their packs to their social behaviors, the team reports today in _BioScience_. Like the dingoes, these wolves will probably mate with more dogs and, in North America, with coyotes, the researchers say. Newsome expects that they will also begin to diverge genetically from prey-hunting wolves, just as the dumpster dingoes did. Because ancient wolves are believed to have evolved into dogs by eating food and garbage at human camps, we may also be seeing “the makings of a new dog” today, hypothesizes Newsome, who plans to begin testing his idea with wolves in Washington state.

Not everyone is convinced. “I doubt if we’re domesticating wolves that eat human-sourced food,” says Robert Wayne, an evolutionary biologist and expert on canine genetics at the University of California, Los Angeles. “That diet is more likely to get them killed.” Unlike the trash-picking dingoes, which reduced their territories, wolves still range so widely that garbage-eaters are less likely to become genetically isolated from the rest of their population, he says. Bobcats, coyotes, and other animals that are already well-integrated in our neighborhoods are more likely to become domesticated, he adds.

Wayne and Newsome agree that for all these species, the best outcome isn’t domestication, but restoration of their habitats and natural prey in places where they can avoid people, livestock, and trash. If humans can arrange that, we won’t have a new dog, Newsome says. But we’ll still have wolves.


----------



## ae1905

*Neuroscientists identify brain circuit necessary for memory formation*

news.mit.edu

When we visit a friend or go to the beach, our brain stores a short-term memory of the experience in a part of the brain called the hippocampus. Those memories are later “consolidated” — that is, transferred to another part of the brain for longer-term storage.

A new MIT study of the neural circuits that underlie this process reveals, for the first time, that memories are actually formed simultaneously in the hippocampus and the long-term storage location in the brain’s cortex. However, the long-term memories remain “silent” for about two weeks before reaching a mature state.

“This and other findings in this paper provide a comprehensive circuit mechanism for consolidation of memory,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, the director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory, and the study’s senior author.

The findings, which appear in _Science_ on April 6, may force some revision of the dominant models of how memory consolidation occurs, the researchers say.

The paper’s lead authors are research scientist Takashi Kitamura, postdoc Sachie Ogawa, and graduate student Dheeraj Roy. Other authors are postdocs Teruhiro Okuyama and Mark Morrissey, technical associate Lillian Smith, and former postdoc Roger Redondo.

*Long-term storage*
Beginning in the 1950s, studies of the famous amnesiac patient Henry Molaison, then known only as Patient H.M., revealed that the hippocampus is essential for forming new long-term memories. Molaison, whose hippocampus was damaged during an operation meant to help control his epileptic seizures, was no longer able to store new memories after the operation. However, he could still access some memories that had been formed before the surgery.

This suggested that long-term episodic memories (memories of specific events) are stored outside the hippocampus. Scientists believe these memories are stored in the neocortex, the part of the brain also responsible for cognitive functions such as attention and planning.

Neuroscientists have developed two major models to describe how memories are transferred from short- to long-term memory. The earliest, known as the standard model, proposes that short-term memories are initially formed and stored in the hippocampus only, before being gradually transferred to long-term storage in the neocortex and disappearing from the hippocampus.

A more recent model, the multiple trace model, suggests that traces of episodic memories remain in the hippocampus. These traces may store details of the memory, while the more general outlines are stored in the neocortex.

Until recently, there has been no good way to test these theories. Most previous studies of memory were based on analyzing how damage to certain brain areas affects memories. However, in 2012, Tonegawa’s lab developed a way to label cells called engram cells, which contain specific memories. This allows the researchers to trace the circuits involved in memory storage and retrieval. They can also artificially reactivate memories by using optogenetics, a technique that allows them to turn target cells on or off using light.

In the new _Science_ study, the researchers used this approach to label memory cells in mice during a fear-conditioning event — that is, a mild electric shock delivered when the mouse is in a particular chamber. Then, they could use light to artificially reactivate these memory cells at different times and see if that reactivation provoked a behavioral response from the mice (freezing in place). The researchers could also determine which memory cells were active when the mice were placed in the chamber where the fear conditioning occurred, prompting them to naturally recall the memory.

The researchers labeled memory cells in three parts of the brain: the hippocampus, the prefrontal cortex, and the basolateral amygdala, which stores memories’ emotional associations.

Just one day after the fear-conditioning event, the researchers found that memories of the event were being stored in engram cells in both the hippocampus and the prefrontal cortex. However, the engram cells in the prefrontal cortex were “silent” — they could stimulate freezing behavior when artificially activated by light, but they did not fire during natural memory recall.

“Already the prefrontal cortex contained the specific memory information,” Kitamura says. “This is contrary to the standard theory of memory consolidation, which says that you gradually transfer the memories. The memory is already there.”

Over the next two weeks, the silent memory cells in the prefrontal cortex gradually matured, as reflected by changes in their anatomy and physiological activity, until the cells became necessary for the animals to naturally recall the event. By the end of the same period, the hippocampal engram cells became silent and were no longer needed for natural recall. However, traces of the memory remained: Reactivating those cells with light still prompted the animals to freeze.

In the basolateral amygdala, once memories were formed, the engram cells remained unchanged throughout the course of the experiment. Those cells, which are necessary to evoke the emotions linked with particular memories, communicate with engram cells in both the hippocampus and the prefrontal cortex.

*Theory revision*
The findings suggest that traditional theories of consolidation may not be accurate, because memories are formed rapidly and simultaneously in the prefrontal cortex and the hippocampus on the day of training.

“They’re formed in parallel but then they go different ways from there. The prefrontal cortex becomes stronger and the hippocampus becomes weaker,” Morrissey says.

“This paper shows clearly that from the get-go, engrams are formed in the prefrontal cortex,” says Paul Frankland, a principal investigator in the Neurobiology Laboratory at the Hospital for Sick Children in Toronto, who was not involved in the study. “It challenges the notion that there’s a movement of the memory trace from the hippocampus to the cortex, and makes the point that these circuits are engaged together at the same time. As the memories age, there’s a shift in the balance of which circuit is engaged as a memory is recalled.”

Further studies are needed to determine whether memories fade completely from hippocampal cells or if some traces remain. Right now, the researchers can only monitor engram cells for about two weeks, but they are working on adapting their technology to work for a longer period.

Kitamura says he believes that some trace of memory may stay in the hippocampus indefinitely, storing details that are retrieved only occasionally. “To discriminate two similar episodes, this silent engram may reactivate and people can retrieve the detailed episodic memory, even at very remote time points,” he says.

The researchers also plan to further investigate how the prefrontal cortex engram maturation process occurs. This study already showed that communication between the prefrontal cortex and the hippocampus is critical, because blocking the circuit connecting those two regions prevented the cortical memory cells from maturing properly.

The research was funded by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, and the JPB Foundation.


----------



## ae1905

*The Arctic Ocean Is Becoming More Like the Atlantic Ocean*

scientificamerican.com Brian Kahn, Climate Central

The Arctic is undergoing an astonishingly rapid transition as climate change overwhelms the region.

New research sheds light on the latest example of the changes afoot, showing that parts of the Arctic Ocean are becoming more like the Atlantic. Warm waters are streaming into the ocean north of Scandinavia and Russia, altering ocean productivity and chemistry. That’s making sea ice recede and kickstarting a feedback loop that could make summer ice a thing of the past.

“2015 was a really anomalous year when we had problems finding a suitable ice floe to launch our drifting buoys,” Igor Polyakov, an oceanographer at the University of Alaska who led the new study, said. “(There was) nothing like that in the past, and it became a motivation to our analysis: why was ice in 2015 so rotten? What drives this huge change?”

The findings, published in _Science_ on Thursday, show that while warming air has a role to play, processes playing out in the ocean itself are fundamentally altering the region.

Those changes will have impacts on the people, plants and animals that call the Arctic home. They could also create more geopolitical tension as resources previously locked under ice become available and shipping lanes open up.

In the eastern Arctic Ocean, the shift is manifesting itself in the ocean’s layering. A cap of cold, less salty water covers the eastern portion of the Arctic Ocean. Underneath it sits a pool of warm, salty Atlantic water that until recently hasn’t been able to reach the surface. That stratification has kept the ice relatively safe from the warm water’s grip.

The ocean has become gradually less stratified since the 1970s. Using data from buoys and satellites, Polyakov and his colleagues have found a more marked shift over the past decade and a half. Since 2002, the difference in water temperatures between the layers has dropped by about 2°F.

In the winters from 2013 to 2015, the cap separating the deep water from the surface water disappeared completely in some locations, allowing the warm Atlantic waters to reach the surface and cut further into the sea ice pack. At the same time, warm air has further reduced sea ice, which is allowing still more mixing of the ocean layers.

The result is a feedback loop that is essentially turning roughly a third of the eastern Arctic Ocean into something resembling the ice-free Atlantic Ocean.

“Rapid changes in the eastern Arctic Ocean, which allow more heat from the ocean interior to reach the bottom of sea ice, are making it more sensitive to climate changes,” Polyakov said. “This is a big step toward the Arctic with seasonal sea-ice cover.”

The changes are already apparent in the region, which has largely been ice-free during the summer since 2011. The sea ice winter maximum, which has set a record low for three years running, has been largely driven by a lack of ice in the eastern Arctic.

Polyakov said he’s seen the rapid changes in ice firsthand. When they first put buoys in the eastern Arctic in 2002, researchers had to reach the sites on heavy icebreakers.

“Now we can reach them using an ice-class ship,” he said. Ice-class ships are not necessarily as reinforced as icebreakers.

The sea ice changes are having profound impacts beyond researchers’ easier access to remote sites. Other research published earlier this week in _Science Advances_ shows that thinning sea ice is allowing phytoplankton to bloom across the region.

Phytoplankton are tiny plants, and like your average potted plant, they need sunlight to bloom. Sea ice has been thick enough to prevent that from happening until very recently. The new findings show that over the past decade, up to 30 percent of the Arctic has become primed for summer blooms.

“Both of our results show the Arctic becoming a very different place than it has been in the past,” Christopher Horvat, an oceanographer at Harvard who led the plankton study, said. “Water pathways are changing, the ecology is changing, all driven by the declining sea ice field.”

_This article is reproduced with permission from Climate Central. The article was first published on April 6, 2017._


----------



## ae1905

*The Science of Problem-Solving*

blogs.scientificamerican.com 

Ulrich Boser

For Gurpreet Dhaliwal, just about every decision is a potential opportunity for effective problem solving. What route should he take into the office? Should Dhaliwal write his research paper today or next week? "We all do problem solving all day long," Dhaliwal told me.

An emergency medicine physician, Dhaliwal is one of the leaders in a field known as clinical reasoning, a type of applied problem solving. In recent years, Dhaliwal has mapped out a better way to solve thorny issues, and he believes that his problem-solving approach can be applied to just about any field, from knitting to chemistry.

For most of us, problem solving is one of those everyday activities that we do without much thought. But it turns out that many common approaches like brainstorming don’t have much research behind them. In contrast, practices that might seem a little odd—like talking to yourself—can be pretty effective.

I came across the new research on problem solving while reporting a book on the science of learning. It was mathematician George Polya who first established the field, detailing a four-step approach to cracking enduring riddles.







For Polya, the first phase of problem solving is “understanding.” In this phase, people should look to find the core idea behind a problem. “You have to understand the problem,” Polya argued. “What is the unknown? What are the data?”

The second phase is “devising a plan,” in which people map out how they’d address the problem. “Find the connection between the data and the unknown,” Polya counseled. 

The third phase of problem solving is “carrying out the plan.” This is a matter of doing—and vetting: “Can you prove that it is correct?”

The final phase for Polya is “looking back.” Or learning from the solution: People should "consolidate their knowledge.”

Though Dhaliwal broadly follows this four-step method, he stresses that procedures alone are not enough. A focused method is helpful, but thorny issues don’t always fit nicely into categories.

This idea is clear in medicine. After all, symptoms rarely match up perfectly with an illness. Dizziness can be the signal of something serious—or a symptom of a lack of sleep. “What is tricky is to figure out what’s signal and what’s noise,” Dhaliwal told me.

In this regard, Dhaliwal argues that what’s at the heart of effective problem solving is making a robust connection between the problem and the solution. "Problem solving is part craft and part science," Dhaliwal says, a type of "matching exercise."

To get a sense of Dhaliwal’s approach, I once watched him solve a perplexing case. It was at a medical conference, and Dhaliwal stood at a dais as a fellow doctor explained the case: Basically, a man came into the ER one day—let’s call him Andreas—and he spat up blood, could not breathe very well, and had a slight fever.

At the start of the process, Dhaliwal recommends developing a one-sentence description of the problem. "It’s like a good Google search,” he said. “You want a concise summary,” and in this case, it was: Sixty-eight-year-old man with hemoptysis, or coughing up blood.

Dhaliwal also made a few early generalizations, thinking that Andreas might have a lung infection or an autoimmune problem. There wasn’t enough data to offer any sort of reliable conclusion, though; at this point, Dhaliwal was just gathering information.

Then came an x-ray and an HIV test, and as each bit of evidence rolled in, Dhaliwal detailed various scenarios, assembling the data in different ways. “To diagnose, sometimes we are trying to lump, and sometimes trying to split,” he said.

Dhaliwal’s eyes flashed, for instance, when it became apparent that Andreas had worked in a fertilizer factory. It meant that Andreas was exposed to noxious chemicals, and for a while, it seemed like a toxic substance was at the root of Andreas’s illness.

Dhaliwal had a few strong pieces of evidence that supported the theory, including some odd-looking red blood cells. But Dhaliwal wasn't comfortable with the level of proof. “I'm like an attorney presenting in a court of law,” Dhaliwal told me. “I want evidence.”

As the case progressed, Dhaliwal came across a new detail: a growth in the heart. This shifted the diagnosis, knocking out the toxic chemical angle, because such exposure doesn't spark tumors.

Eventually, Dhaliwal uncovered a robust pattern, diagnosing Andreas with a cardiac angiosarcoma, or heart cancer. The pattern best explained the problem. “Diagnosing often comes down to the ability to pull things together,” he said.

Dhaliwal doesn’t always get the right answer. But it’s clear that a more focused approach to problem solving can make a real difference. If we’re more aware of how we approach an issue, we are better able to resolve it.

This idea explains why people who talk to themselves are more effective at problem solving. Self-queries—like _is there enough evidence?_—help us think through an issue.

As for Dhaliwal, he had yet another problem to solve after his diagnosis of Andreas: Should he take an Uber to the airport? Or should he grab a cab? After a little thought, Dhaliwal decided on an Uber. It was likely to be cheaper and equally comfortable. In other words, it was the solution that best matched the problem.


----------



## Amine

Most interesting science story to me is that of the Demon Core. And criticality accidents in general. Wikipedia has a page on them. Crazy shit.


----------



## zynthaxx

https://arstechnica.com/science/201...rths-early-atmosphere-yields-all-4-rna-bases/

Apparently, hit the atmosphere hard enough with something big enough, and you get something cooler than the Miller-Urey experiment.


----------



## ae1905

*Hydrothermal Vents on Enceladus Hint at Life Beyond Earth*

blogs.discovermagazine.com 

Enceladus _(Credit: NASA/JPL-Caltech/Space Science Institute) _

In 1977, a group of marine researchers discovered something they’d only theorized before: cracks in the ocean floor releasing heat, warming up (and often boiling) the water around them. They also found mollusks there, and subsequently discovered vents have yielded heat-resistant microbes, giant tube worms, and more fantastic creatures living in what are essentially small, underwater volcanoes.

Now, NASA has announced indirect evidence for hydrothermal vents beyond Earth. In its encounters with Saturn’s moon Enceladus, the Cassini craft found chemicals associated with such vents. The results were published today in _Science_. It adds to the body of evidence that Enceladus could be ripe for life.

“Enceladus is too small to have retained the hydrogen from when it formed, so the hydrogen we see today is coming from inside Enceladus,” Linda Spilker, project scientist on the Cassini mission, said in a press conference.

Enceladus, a tiny moon, took Cassini researchers by surprise when they discovered what seemed to be geysers of water shooting from its south pole in 2005. Subsequent investigations built a picture of the origin: liquid water under the surface of Enceladus, which led to the idea of an entire subsurface ocean. The heating mechanism has not yet been identified.

An enhanced image of Saturn’s moon Enceladus, taken in 2005 and backlit by the sun, shows the fountain-like sources of the fine spray of material that towers over the south polar region. _(Credit: NASA/JPL/Space Science Institute) _

Cassini’s Ion Neutral Mass Spectrometer made the observation of molecular hydrogen in the ejecta from these geysers. According to principal investigator Hunter Waite of the Southwest Research Institute and his co-investigators, the source almost certainly has to be hydrothermal vents at Enceladus’ sea floor. This means there’s plenty of geological activity, increasing the chances for life.

Indeed, researchers published a paper last year suggesting that hydrothermal vents were the source of life on Earth, where chemical reactions fed these early microbes. If that’s the case on Enceladus, the ocean may have microbial life at the very least.

“The hydrogen could be a potential source of chemical energy for any microbes living in Enceladus’ ocean,” Spilker says.

Of course, it may be years or even decades until we know for sure — in September, NASA will intentionally crash Cassini into Saturn to make sure it doesn’t crash land into Titan or Enceladus and accidentally contaminate either potentially habitable moon with Earth bacteria.

This post originally appeared on _Astronomy.com_.


----------



## ae1905

*Magnetic Maps Behind one of Nature’s Craziest Migrations*

blogs.discovermagazine.com 

Young eels, or elvers, migrate from their ocean hatcheries to brackish waters where they mature. (_Credit: Maryland Fisheries Resources Office, USFWS_)

In the middle of the Atlantic Ocean, there’s an enormous patch of seaweed that’s perplexed sailors for centuries: the Sargasso Sea. This strange place is where American and European eels go to breed. Once born, the little eels — called elvers — have to venture toward land.

American eels live out their lives — which can be more than a decade — just off the eastern seaboard. Their cousins across the pond live everywhere from Scandinavia to North Africa. Then, at the ends of their lives, both species journey thousands of miles out to sea to lay their eggs.

It’s a truly remarkable journey, and in recent years scientists have tracked this migration for the first time. For a century, the eels’ path was a mystery: they left the East Coast and just magically appeared in the Sargasso Sea. To crack the case, researchers had to figure out how to attach pop-up satellite tags to the eels that wouldn’t kill them during the sometimes 1,500-mile swim. The researchers found that the eels use ocean currents to hitch a ride to their chosen coast.

After their birth in the Sargasso Sea, American and European eels migrate toward land. (_Credit: USFWS_)

And in a paper published Thursday in _Current Biology_, scientists have announced another discovery about how they do it. By studying European eels, the researchers figured out that these eels actually have a magnetic map. Rather than guiding the fierce little swimmers toward land, the “map sense” steers them toward the Gulf Stream, which offers an easier ride toward Europe.

“We were not surprised to find that eels have a magnetic map, but we were surprised to discover how well they can detect subtle differences in magnetic fields,” said University of North Carolina, Chapel Hill scientist Lewis Naisbett-Jones. “We were even more surprised when our ocean simulation models revealed that the little eels use their map not so much to locate Europe, but to target a big conveyor belt — the Gulf Stream — that will take them there. Presumably, a little bit of work (i.e., swimming) helps increase their chances of catching a mostly free ride to their destination.”

The team figured this out with an experimental apparatus that produced magnetic fields mimicking those experienced along the animals’ migratory path. The scientists simply dropped baby eels into the contraption and watched which way they swam. Then they used computer models of ocean currents to simulate how their results would play out in the real world. Elvers that swam even vaguely in the right direction would have a much higher chance of reaching the Gulf Stream, the scientists found. This magnetic sixth sense puts them in good company alongside sea turtles and salmon.
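
The "even vaguely in the right direction" finding can be illustrated with a toy Monte Carlo sketch. To be clear, this is not the study's ocean model: the drift speeds, noise level, and the band standing in for the Gulf Stream are all invented numbers, chosen only to show why a weak heading bias is enough.

```python
import math
import random

def fraction_reaching_stream(heading_bias_deg, n=2000, steps=200, seed=42):
    """Toy 2-D drift model (illustrative numbers only, not the study's model).

    Each simulated elver takes weak, noisy swimming steps centered on a
    preferred compass heading (0 deg = east, 90 deg = north) while a steady
    background current pushes it east. Returns the fraction of elvers whose
    northward displacement crosses a band standing in for the Gulf Stream.
    """
    rng = random.Random(seed)
    reached = 0
    for _ in range(n):
        x = y = 0.0
        for _ in range(steps):
            # Swimming: slow and very noisy, but biased toward one heading.
            heading = math.radians(rng.gauss(heading_bias_deg, 60.0))
            x += 0.1 * math.cos(heading)
            y += 0.1 * math.sin(heading)
            # Background current: steady eastward drift, no northward help.
            x += 0.05
        if y > 5.0:  # crossed into the "Gulf Stream" band
            reached += 1
    return reached / n

# A vaguely northward bias vs. no useful bias (due east):
p_biased = fraction_reaching_stream(90.0)
p_unbiased = fraction_reaching_stream(0.0)
```

Even with 60 degrees of noise around the preferred heading, the biased swimmers almost all reach the band while the unbiased ones almost never do, which is the qualitative point: a crude magnetic compass pays off without any precise navigation.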

However, the elvers’ journey toward land is actually just the start. There’s a major obstacle once they get there. Elver fishing is booming. American anglers can get thousands of dollars per pound for the well-traveled and tiny eels. Once sold, the elvers are raised to adulthood and sold for sushi — presumably to customers totally unaware of the strange lives these eels have had.


----------



## ae1905

*Dolphin's-Eye Video Is Breathtaking, Barfy*

blogs.discovermagazine.com 

It’s surprisingly hard to stick a camera to a dolphin. Surprising, anyway, when you consider the other animals that have carried monitoring devices down into the ocean for human scientists: sharks, sea turtles, birds, manatees, even whales. When a group of researchers recently overcame the challenges and created a camera that dolphins can wear, they were inducted into a dizzying underwater world.

Scientists may attach instruments to marine animals to do environmental research, as with seals that gather climate data. Or they may be interested in the animals themselves—for example, the ideal fatness for elephant seals. 

The information camera-carrying animals send back can help researchers better understand how these species behave and how to protect them.

But attaching a camera to an animal is easiest when you can capture the creature and hold it still. Alternatively, if you’re studying a great big animal with a flattish back, you might be able to slap on your instrument as it swims past your boat. Small, fast-swimming dolphins are tougher. Before now, no one had ever gotten a video camera onto a small cetacean, writes University of Alaska Southeast biologist Heidi Pearson in _Marine Biology_.

Pearson and her coauthors tackled the problem with dusky dolphins. This species grows to a little less than six feet long. A population of dusky dolphins off the coast of New Zealand made good subjects because these animals are already used to visits from scientists and tourists. They’re also abundant—they swim in groups of up to 1,000 during the day.

The researchers developed a tool they call C-VISS, for “cetacean-borne video camera and integrated sensor system.” It includes a miniature video camera, a tool to monitor the animal’s depth, four suction cups, a flotation device so it pops back up to the surface when it comes loose, and transmitters so researchers can track it down again afterward.

Then they started trying to attach it. The researchers brought their boat alongside a group of swimming dolphins and used a specially built pole to hold out the C-VISS. The device was stuck to the end of the pole with Velcro; if a person got the device successfully suctioned onto a swimming animal’s back, the Velcro would tear away.

The method wasn’t surefire. The researchers did five trials, and in their _most_ successful trial they stuck the camera onto a dolphin only 8 percent of the time. But that was enough to generate almost 9 hours of video footage.

You can watch one brief video here. It’s a pretty exciting ride. The nauseating up-and-down movement of the camera might be annoying—until you recall that you’re riding along with an animal doing, well, a dolphin kick. You can see some of the other dolphins in the group, and if you watch closely you’ll spot a baby dolphin swimming alongside its mother.

This type of camera could give all kinds of useful insights into dolphin life, the researchers say. In their footage, they saw both friendly flipper rubbing and more-than-friendly sexual behavior. (Allegedly, that’s in panel d of the figure above.) The camera also caught some of the animals that dolphins were preying on, as well as the plants in their habitats.

Crucially, the researchers also saw that the cameras didn’t change dolphins’ behavior. The animals tended to swim away right after getting slapped with a camera, but they didn’t act panicked. Afterward, they behaved the same as the rest of their group. This is important because scientists don’t want to endanger animals with their equipment—or get results that don’t match what a dolphin does in its everyday life.

Next, the scientists hope to improve their device by shrinking it further and adding more tools. And, presumably, by making something they can attach on the first or second try.


----------



## ae1905

*Why Felines Can't Resist the #CatSquare*

blogs.discovermagazine.com







Next best thing to a hidey-hole box? _(Credit: Maggie Villiger, CC BY-ND)_

Twitter’s been on fire with people amazed by cats that seem compelled to park themselves in squares of tape marked out on the floor. These felines appear powerless to resist the call of the #CatSquare. 

This social media fascination is a variation on a question I heard over and over as a panelist on Animal Planet’s “America’s Cutest Pets” series. I was asked to watch video after video of cats climbing into cardboard boxes, suitcases, sinks, plastic storage bins, cupboards and even wide-necked flower vases.

“That’s so cute … but why do you think she does that?” was always the question. It was as if each climbing or squeezing incident had a completely different explanation.

It did not. It’s just a fact of life that cats like to squeeze into small spaces where they feel much safer and more secure. Instead of being exposed to the clamor and possible danger of wide open spaces, cats prefer to huddle in smaller, more clearly delineated areas.

Kittens get securely snuggled by their mothers. _(Credit: Shutterstock) _

When young, they used to snuggle with their mom and litter mates, feeling the warmth and soothing contact. Think of it as a kind of swaddling behavior. The close contact with the box’s interior, we believe, releases endorphins – nature’s own morphine-like substances – causing pleasure and reducing stress.

Along with Temple Grandin, I researched the comforting effect of “lateral side pressure.” We found that the drug naltrexone, which counteracts endorphins, reversed the soporific effect of gentle squeezing of pigs. Hugs, anyone?

Also remember that cats make nests – small, discrete areas where mother cats give birth and provide sanctuary for their kittens. Note that no behavior is entirely unique to any one particular sex, be they neutered or not. Small spaces are in cats’ behavioral repertoire and are generally good (except for the cat carrier, of course, which has negative connotations – like car rides or a visit to the vet).

One variation on this theme occurs when the box is so shallow that it does not provide all the creature comforts it might.

Or then again, the box may have no walls at all but simply be a representation of a box – say a taped-in square on the ground. This virtual box is not as good as the real thing but is at least a representation of what might be – if only there were a real square box to nestle in.
This virtual box may provide some misplaced sense of security and psychosomatic comfort.

The cats-in-boxes issue was put to the test by Dutch researchers who gave shelter cats boxes as retreats. According to the study, cats with boxes adapted to their new environment more quickly than a control group without boxes; the conclusion was that the cats with boxes were less stressed because they had a cardboard hidey-hole to hunker down in.

Availability of a cozy box is part of a well-appointed space for a cat. _(Credit: Lisa Norwood, CC BY-NC)_

Let this be a lesson to all cat people – cats need boxes or other vessels for environmental enrichment purposes. Hidey-holes in elevated locations are even better: Being high up provides security and a bird's-eye view of the world, so to speak.

Without a real box, a square on the ground may be the next best thing for a cat, though it’s a poor substitute for the real thing. Whether a shoe box, shopping bag or a square on the ground, it probably gives a cat a sense of security that open space just can’t provide.
Nicholas Dodman, Professor Emeritus of Behavioral Pharmacology and Animal Behavior, Cummings School of Veterinary Medicine, _Tufts University_

_This article was originally published on The Conversation. Read the original article._




https://twitter.com/LisaGray_HouTX/timelines/854383119767662597


----------



## ae1905

*Giant Virus Found in Sewage Blurs the Line Between Life and Non-Life*

blogs.discovermagazine.com 
An artist’s rendition of the newly discovered Klosneuvirus (Image: NIH / Ella Maru studio)

In most biology textbooks, there’s a clear separation between the three domains of cellular organisms – Bacteria, Archaea, and Eukaryotes – and viruses. This fault line is also typically accepted as the divider between life and non-life: since viruses rely on host machinery to enact metabolic transformations and to replicate, they are not self-sufficient, and generally not considered living entities.

But several discoveries of giant viruses over the last decade have blurred this distinction. Some viruses are even larger and contain more genes than typical microbes like _E. coli_. Ultra-small bacteria detected in filtered groundwater from Rifle, Colorado are moving the goalposts from the opposite end, leading to a virus-microbe continuum in which distinguishing one from the other isn’t so straightforward. Among the alluring interpretations: giant viruses could be indicative of a fourth domain of life.

A recent study led by Frederik Schulz at the Department of Energy’s Joint Genome Institute blurs the virus-microbe line even further. While assembling a metagenome from sewage sludge in Klosterneuburg, Austria, Schulz found several genes that all mapped back to the same unknown virus, genes that until now have only been associated with free-living cells.

The particle – named Klosneuvirus – is still a virus, given its other genes and outer coat, but its 1.57-million-base genome allows a greater degree of autonomy than many of its viral relatives. Most notably, it has a relatively complete complement of protein-making machinery, which would reduce its dependence on host cells to do its bidding. For example, most viruses lack aminoacyl tRNA synthetase enzymes, which shuttle amino acids onto transfer RNA molecules; these in turn make their way to the ribosome, dropping off their cargo to build proteins from the chains of amino acids. While some previously discovered giant viruses have seven of the 20 aminoacyl tRNA synthetases, Klosneuvirus has 19, making it almost entirely independent of host involvement in protein synthesis. (It’s also worth noting that autonomy is not a requirement for cellular life, either: many microbes are “auxotrophic,” meaning they depend on external input of organics – often amino acids – in order to survive.)

So could this sophisticated, rule-breaking giant virus indeed be a sign of the mythical fourth domain? To find out, the team compared Klosneuvirus’s aminoacyl tRNA synthetase sequences with other forms of the enzymes across the tree of life. The results were all over the place, with each synthetase showing closest similarity to a different organism (mostly algae). In the ever hyperbolic language of scientific journalese, Schulz notes that “these findings are incompatible with the fourth domain hypothesis…and instead imply piecemeal acquisition of these genes by giant viruses.” The synthetases don’t seem to have evolved together, from the same branch point and within the same organism; rather, they were scooped up by an opportunistic virus and incorporated into an increasingly mature metabolic network.
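The per-gene reasoning behind that conclusion can be sketched as a toy script (my own illustration with invented similarity scores, not the study's actual pipeline): if the synthetases had been inherited together from a single "fourth domain" ancestor, their closest relatives should all fall in one lineage; scattered best hits point instead to piecemeal acquisition.

```python
def best_hit(similarities):
    """Return the lineage with the highest similarity score for one gene."""
    return max(similarities, key=similarities.get)

def acquisition_pattern(genes):
    """Classify gene origins: all best hits in one lineage -> 'common origin';
    best hits scattered across lineages -> 'piecemeal acquisition'."""
    hits = {gene: best_hit(sims) for gene, sims in genes.items()}
    verdict = ("common origin" if len(set(hits.values())) == 1
               else "piecemeal acquisition")
    return hits, verdict

# Invented example scores for three synthetase genes against three lineages.
genes = {
    "IleRS": {"green algae": 0.81, "amoebae": 0.62, "fungi": 0.55},
    "TyrRS": {"green algae": 0.58, "amoebae": 0.77, "fungi": 0.60},
    "ValRS": {"green algae": 0.64, "amoebae": 0.59, "fungi": 0.79},
}

hits, verdict = acquisition_pattern(genes)
print(hits)     # each gene's closest lineage, scattered across three groups
print(verdict)  # -> piecemeal acquisition
```

The real analysis builds phylogenetic trees rather than comparing raw scores, but the logic of the inference is the same: coherent ancestry predicts coherent placement.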

As suggested by previous revelations of giant viruses, Klosneuvirus is likely just the beginning of a more thorough reconfiguration of the tree of life. After the intriguing result from the sewage treatment plant in Austria, Schulz looked for genomes of similar viruses, lurking in previously obtained metagenomes from around the world. He found three more – enough to propose a new subfamily, the _Klosneuvirinae_ – the latest links in the chain connecting viruses and the three domains of cellular life.


----------



## ae1905

*Scientists have worked out how dung beetles use the Milky Way to hold their course*

theconversation.com 

James Foster

Insects navigate in much the same way that ancient humans did: using the sky. Their primary cue is the position of the sun, but insects can also detect properties of skylight (the blue light scattered by the upper atmosphere) that give them indirect information about the sun’s position. Skylight cues include gradients in brightness and colour across the sky and the way light is polarised by the atmosphere. Together, these sky “compass cues” allow many insect species to hold a stable course.

At night, as visual cues become harder to detect, this process becomes more challenging. Some can use the light of the moon but one insect, the nocturnal dung beetle _Scarabaeus satyrus_, uses light from the Milky Way to orient itself. To find out exactly how this process works, my colleagues and I constructed an artificial Milky Way, using LEDs, to test the beetles’ abilities. We found that they rely on the difference in brightness between different parts of the Milky Way to work out which way to go.

_Scarabaeus satyrus_ holds its course with apparent ease every night. These beetles take to the air at dusk on the African savanna, in search of the fresh animal droppings on which they feed. But they are not alone and, to escape competition from other dung beetles, they shape a piece of dung into a ball and roll it a few meters away from the dung pile before burying and consuming it.







_Where am I supposed to take this thing? Shutterstock_

To avoid returning to their starting point, they maintain a straight path while rolling their ball. Scientists discovered that the beetles could do this even on moonless clear nights. So in 2009, a group of researchers took some beetles on a trip to the planetarium in Johannesburg, and watched them try to orient themselves under different star patterns.

They found the beetles could hold their course well when the planetarium displayed just the Milky Way, the streak of light across the night sky produced by the disc-shaped arrangement of the stars in our galaxy. But the beetles became disoriented when only the brightest stars in the sky were shown.

What was still unclear was exactly what kind of compass cue the beetles extracted from the Milky Way. We knew, for example, that night-migrating birds learn the constellations surrounding the sky’s northern centre of rotation, much as sailors did before the advent of modern navigation systems. These constellations remain in the northern part of the sky as the Earth rotates, and so are a reliable reference for north–south journeys.

The planetarium experiments had shown that the beetles don’t use constellations of bright stars, but perhaps they could learn patterns within the Milky Way instead. My colleagues and I then proposed that the beetles might perform a brightness comparison, identifying either the brightest point in the Milky Way or a brightness gradient across the sky that is influenced by the Milky Way.

We used our artificial night sky to test this theory, constructing a simplified Milky Way streak that simulated different patterns of stars and brightness gradients. We found that the beetles became lost when given a pattern of stars within the artificial Milky Way. The beetles only maintained their heading when the two sides of the streak differed in brightness.

This suggests nocturnal beetles do not use the intricate star patterns within the Milky Way as their compass cue, but instead identify a brightness difference across the night sky to set their heading. This is similar to what their day-active relatives do when the sun is hidden: they orient themselves using the brightness gradient of the daytime sky.
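As a rough sketch (my own toy model, not the researchers' code), the strategy amounts to a simple feedback loop: remember the left-right brightness difference at the start of the roll, then steer so as to hold that difference constant.

```python
def steering_correction(left, right, reference_diff, gain=0.5):
    """Turn command (positive = rotate left/counterclockwise) that drives
    the current left-minus-right sky-brightness difference back toward
    the value remembered at the start of the roll."""
    error = (left - right) - reference_diff
    return gain * error

# The beetle starts its roll with the bright half of the Milky Way giving
# a left-minus-right difference of 0.6 (arbitrary brightness units).
reference = 0.6
print(steering_correction(0.8, 0.2, reference))  # near zero: on course
print(steering_correction(0.9, 0.1, reference))  # positive: turn left to correct a rightward drift
print(steering_correction(0.5, 0.5, reference))  # negative: turn right to correct a leftward drift
```

Note that no star identification is needed at all, only two pooled brightness readings, which is why such small eyes and brains can pull it off.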







_Night-time compass. Shutterstock_

This brightness-comparison strategy may be less sophisticated than the way birds and human sailors identify specific constellations, but, given how small the beetles' eyes and brains are, it is an efficient solution for interpreting the complex information present in the starry sky. In this way, they overcome the limited bandwidth of their information-processing systems and do more with less, just as humans have learnt to do with technology.

This straightforward brightness comparison strategy is particularly effective over short distances. So although _Scarabaeus satyrus_ is the only species known to hold its course in this way, the technique may also be used by many other nocturnal animals that perform short journeys at night.


----------



## ae1905




----------



## ae1905

from the citizen science files:

bbc.com *Aurora photographers find new night sky lights and call them Steve*








_Image copyright: ESA_

A group of aurora enthusiasts have found a new type of light in the night sky and named it Steve.

Eric Donovan from the University of Calgary in Canada spotted the feature in photos shared on a Facebook group.

He did not recognise it as a catalogued phenomenon and although the group were calling it a proton arc, he knew proton auroras were not visible.

Testing showed it appeared to be a hot stream of fast-flowing gas in the higher reaches of the atmosphere.

The European Space Agency (ESA) sent electric field instruments to measure it 300km (190 miles) above the surface of the Earth and found the temperature of the air was 3,000C (5,400F) hotter inside the gas stream than outside it.

Inside, the 25km-wide ribbon of gas was flowing at 6 km/s (13,000mph), 600 times faster than the air on either side.

Relatively little else is known about the big purple light as yet but it appears it is not an aurora as it does not stem from the interaction of solar particles with the Earth's magnetic field.

There are reports that the group called it Steve in homage to a 2006 children's film, Over the Hedge, where the characters give the name to a creature they have not seen before.

Roger Haagmans of the ESA said: "It is amazing how a beautiful natural phenomenon, seen by observant citizens, can trigger scientists' curiosity.

"It turns out that Steve is actually remarkably common, but we hadn't noticed it before.

"It's thanks to ground-based observations, satellites, today's explosion of access to data and an army of citizen scientists joining forces to document it."


----------



## ae1905

The Atlantic 


Portraits of the Earth-Moon System 

Alan Taylor
1:36 PM ET
 
The Earth and its moon almost form a binary planet system. The moon is enormous—relative to the size of its planet—compared with the rest of the solar system. Since the 1960s, spacecraft and astronauts have been able to “step back” far enough to capture combined portraits of the Earth and its moon, separated by some 240,000 miles. Gathered below are some of the best of these portraits, some from as far away as 100 million miles.















The Earth straddling the limb of the moon, as seen from above Compton crater by NASA's Lunar Reconnaissance Orbiter on October 12, 2015. The large tan area in the upper right is the Sahara desert, and just beyond is Saudi Arabia. The Atlantic and Pacific coasts of South America are visible to the left. # 
NASA / GSFC / Arizona State University 


  
  
  
 








An image of Earth and the moon, acquired on October 3, 2007, by the HiRISE camera orbiting Mars on NASA's Mars Reconnaissance Orbiter. At the time the image was taken, Earth was 142 million kilometers (88 million miles) from Mars. The phase angle is 98 degrees, which means that less than half of the disk of the Earth and the disk of the moon have direct illumination. We could image Earth and moon at full disk illumination only when they are on the opposite side of the sun from Mars, but then the range would be much greater and the image would show less detail. # 
JPL-Caltech / University of Arizona / NASA 
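As a back-of-the-envelope check of the caption's geometry (my own calculation, not JPL's): for a sphere, the sunlit fraction of the visible disk is (1 + cos(phase angle)) / 2, so a 98-degree phase angle does indeed leave a bit under half the disk in direct illumination.

```python
import math

def illuminated_fraction(phase_angle_deg):
    """Fraction of a sphere's visible disk in direct sunlight,
    given the sun-body-observer phase angle in degrees."""
    return (1 + math.cos(math.radians(phase_angle_deg))) / 2

print(f"{illuminated_fraction(98):.2f}")  # about 0.43, i.e. less than half lit
```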


  
  
  
 








Observing the moon from Earth orbit, aboard the International Space Station over the western Atlantic, on September 26, 2007. # 
NASA 


  
  
  
 








A crew member aboard the International Space Station took this image of the northern Mediterranean Sea, centered on the island of Elba, with the city lights of the Italian towns of Piombino and Punta Ala at image right. Shooting toward the reflection of the moon on the sea surface, moonglint reveals the highly complex patterns on the sea surface – the night equivalent of sunglint. The strongest reflection is near the center of the moon’s disc, which brightens the sea surface around the island of Elba. But in the complex patterns seen from space, the dark areas of the sea surface make islands like Elba, Montecristo (lower left) and Pianosa (left) more difficult to see. Photographed on October 17, 2013. # 
NASA 


  
  
  
 








Long before man journeyed to the moon and looked back at the tiny, fragile planet that houses humanity, remote orbiters were sending back pictures of home. Sent to scope out potential landing sites on the moon, NASA's series of five Lunar Orbiters also sent back the earliest views of Earth from another celestial body. This image, taken in 1966 by Lunar Orbiter 1, is among the first views of Earth from the moon. When the orbiter sent back the data in 1966, the technology did not exist to produce a full-resolution image. For decades, the image existed as a grainy black-and-white photo. More than forty years later, NASA recreated the image from the original data, producing for the first time a high-resolution view of the moon and Earth from the Lunar Orbiter Missions. The image was released on November 13, 2008. # 
NASA / Lunar Orbiter Image Recovery Project at NASA Ames Research Center 


  
  
  
 








Earth and the far side of the moon on July 5, 2016, also featuring Typhoon Nepartak over the Pacific Ocean, imaged by NASA’s Deep Space Climate Observatory (DSCOVR) satellite, about 1.5 million km (930,000 mi) from Earth. # 
NASA 


  
  
  
 








Young people look at the rare sight of the setting sun appearing as a crescent as the moon moves into alignment between the Sun and the Earth during a partial solar eclipse, as seen from Manila Bay on January 26, 2009. # 
Gil Nartea / AFP / Getty 


  
  
  
 








Earth viewed over the lunar horizon, as seen from Japan Aerospace Exploration Agency's SELENE lunar orbiter, on October 7, 2007. # 
 JAXA / NHK 


  
  
  
 








On December 16, 1992, 8 days after its encounter with Earth, the Galileo spacecraft looked back from a distance of about 6.2 million kilometers (3.9 million miles) to capture this remarkable view of the moon in orbit about Earth. The moon is in the foreground; its orbital path is from left to right. Brightly colored Earth contrasts strongly with the moon, which reflects only about one-third as much sunlight as our world. # 
JPL / NASA 


  
  
  
 








This distorted view of a full moon seen through the Earth's atmosphere was photographed by an Expedition 14 crew member aboard the International Space Station on December 4, 2006. Visible at bottom center, the Jade Dragon Snow Mountain massif in southwestern China. # 
JSC / NASA 


  
  
  
 








In the lower left portion of this image, the Earth can be seen, as well as the much smaller moon to Earth's right, on May 6, 2010. When the MESSENGER spacecraft took this image, a distance of 183 million kilometers (114 million miles) separated the spacecraft and Earth. To provide context for this distance, the average separation between the Earth and the Sun is about 150 million kilometers (93 million miles). Though it is a beautiful, thought-provoking picture, viewing our planet from far away was not the main reason that the mission team planned the collection of this image. Instead, this image was acquired as part of MESSENGER's campaign to search for vulcanoids, small rocky objects that have been postulated to exist in orbits between Mercury and the Sun. # 
NASA / Johns Hopkins University Applied Physics Laboratory / Carnegie Institution of Washington 


  
  
  
 








On September 13, 2015, as NASA's Solar Dynamics Observatory, or SDO, kept up its constant watch on the sun, its view was photobombed not once, but twice. Just as the moon came into SDO's field of view on a path to cross the sun, Earth entered the picture, blocking SDO's view completely. When SDO's orbit finally emerged from behind Earth, the moon was just completing its journey across the sun's face. Earth's outline looks fuzzy, while the moon's is crystal-clear. This is because, while the planet itself completely blocks the sun's light, Earth's atmosphere is an incomplete barrier, blocking different amounts of light at different altitudes. # 
NASA / GSFC / Solar Dynamics Observatory 


  
  
  
 








Crowds look on as the super moon rises behind the Fremantle War Memorial at Monument Hill on November 14, 2016 in Fremantle, Australia. # 
Paul Kane / Getty 


  
  
  
 








Texas at night. This wide-angle, nighttime image was taken by astronauts looking from the International Space Station out southeastward over the Gulf of Mexico on February 11, 2015. Moonlight reflects diffusely off the waters of the gulf (image center left), making the largest illuminated area in the image. The sharp edge of light patterns of coastal cities traces out the long curve of the gulf shoreline, from New Orleans at the mouth of the Mississippi River, to Houston (both image left), to Brownsville (image center) in the westernmost gulf. # 
NASA 


  
  
  
 








The Apollo 11 Lunar Module (LM) ascent stage, with astronauts Neil A. Armstrong and Edwin E. Aldrin Jr. onboard, is photographed from the Command and Service Modules (CSM) in lunar orbit on July 21, 1969. This view is looking west with the Earth rising above the lunar horizon. # 
NASA 


  
  
  
 








Eclipsed by the silhouetted horizon of the moon, the crescent Earth appears in the shape of a pair of horns in this unusual Apollo 17 photograph made on December 19, 1972. The three astronauts – Eugene A. Cernan, Ronald E. Evans and Harrison H. Schmitt – were just about to begin their journey homeward following the successful lunar landing phase of their mission. # 
JSC / NASA 


  
  
  
 








The moon, viewed from the International Space Station, over a cloudy western Pacific Ocean, on August 5, 2003. # 
NASA 


  
  
  
 








This picture of a crescent-shaped Earth and Moon was recorded on September 18, 1977, by NASA's Voyager 1 when it was 7.25 million miles (11.66 million kilometers) from Earth. The moon is at the top of the picture and beyond the Earth as viewed by Voyager. In the picture are eastern Asia, the western Pacific Ocean and part of the Arctic. Voyager 1 was directly above Mt. Everest (on the night side of the planet at 25 degrees north latitude) when the picture was taken. # 

JPL / NASA 



  
  
  
 








An Indian man rides a horse past people watching the 'supermoon' rise at Marina Beach in Chennai on November 14, 2016. # 

Arun Sankar / AFP / Getty 


  
  
  
 








Backdropped by the blackness of space and Earth's horizon, the Harmony node in Space Shuttle Discovery's payload bay, vertical stabilizer and orbital maneuvering system pods are featured in this image photographed by a STS-120 crewmember on October 24, 2007. Earth's moon is also visible at center. # 

JSC / NASA 


  
  
  
 








This July 1969 view from the Apollo 11 spacecraft shows the Earth rising above the moon's horizon. The lunar terrain pictured is in the area of Smyth's Sea on the nearside. # 

NASA


----------



## ae1905




----------



## ae1905

*Dog Family Tree Reveals Hidden History of Canine Diversity*

scientificamerican.com 

Erin Ross,Nature magazine

A new family tree of dogs containing more than 160 breeds reveals the hidden history of man’s best friend, and even shows how studying canine genomes might help with research into human disease.

In a study published on April 25 in _Cell Reports_, scientists examined the genomes of 1,346 dogs to create one of the most diverse maps produced so far tracing the relationship between breeds. The map shows the types of dog that people crossed to create modern breeds and reveals that canines bred to perform similar functions, such as working and herding dogs, don't necessarily share the same origins. The analysis even hints at an ancient type of dog that could have come over to the Americas with people thousands of years before Christopher Columbus arrived in the New World.

The new work could come as a surprise to owners and breeders who are familiar with how dogs are grouped into categories. “You would think that all working dogs or all herding dogs are related, but that isn’t the case,” says Heidi Parker, a biologist at the US National Institutes of Health (NIH) in Bethesda, Maryland, and a study author.

When geneticists tried to map out herding-dog lineages in the past, they couldn’t do so accurately. Parker and Elaine Ostrander, also a biologist at the NIH and a study author, say that this was because herding dogs emerged through selective breeding at multiple times and in many different places.

“In retrospect, that makes sense,” says Ostrander. “What qualities you’d want in a dog that herds bison are different from mountain goats, which are different from sheep, and so on.”

Most of the breeds in the study arose from dog groups that originated in Europe and Asia. But domestic dogs came to the Americas thousands of years ago, when people crossed the Bering land bridge linking Alaska and Siberia. These New World dogs later disappeared when European and Asian dogs arrived in the Americas. Researchers have looked for the genetic legacy of these ancient canines in the DNA of modern American breeds, but have found little evidence until now.

The way that two South American breeds, the Peruvian hairless dog and the xoloitzcuintli, clustered together on the family tree suggested to Ostrander and Parker that those animals could share genes not found in any of the other breeds in their analysis. Parker thinks that those genes could have come from dogs that were present in the Americas before Columbus’s arrival.

“I think our view of the formation of modern dog breeds has historically been one-dimensional,” says Bob Wayne, an evolutionary biologist at the University of California, Los Angeles. “We didn’t consider that the process has a deep historical legacy.”

That extends to what was probably the first period of domestication for canines in hunter-gatherer times. Ostrander and Parker think that dog breeds underwent two major periods of diversification. Thousands of years ago, dogs were selected for their skills, whereas a few hundred years ago, the animals were bred for physical traits.

“You would never be able to find something like this with cows or cats,” says Wayne. “We haven’t done this kind of intense deliberate breeding with anything but dogs.”

Although the latest study can help researchers to better understand the history of the domestic dog, there are several practical reasons for creating a database such as that produced by Ostrander, Parker and their colleagues. One reason is that it can help in diagnosing illnesses in domestic dogs. Another is that it can aid the study of human diseases.

Dogs and people can suffer from similar conditions, such as epilepsy. In humans, there might be hundreds of genes that can influence that illness. However, because dog breeds are relatively genetically isolated, each breed might carry only one or two of the genes involved in epilepsy, says Ostrander. “By studying dogs, we can look at each [gene] individually. It’s much more efficient.”

_This article is reproduced with permission and was first published on April 25, 2017._


----------



## ae1905

*Rising Sea Levels Will Hit California Harder Than Other Places*

scientificamerican.com 

Anne C. Mulkern,E&E News

Melting ice sheets in Antarctica will wallop California with greater sea-level rise than the world average, threatening the state's iconic beaches and important infrastructure, according to a report issued yesterday.

The latest science shows that the rate of ice loss from Greenland and Antarctica is increasing. That soon will become the primary contributor to global sea-level rise, overtaking ocean expansion from warming waters and the melting of mountain glaciers and ice caps, said the study, submitted to the California Ocean Protection Council.

That ice loss causes higher sea-level rise in California, it said, because of the Earth's rotation and the gravitational pull that the ice sheets exert on surrounding waters. If the ice melt is from West Antarctica, the impacts on California are even greater.

“For California, there is no worse place for land ice to be lost than from the West Antarctic Ice Sheet,” the study said. “For every foot of global sea-level rise caused by the loss of ice on West Antarctica, sea-level will rise approximately 1.25 feet along the California coast.”

Melting in Antarctica puts the California coast essentially “in the bull's-eye” of the magnified sea-level rise, said Dan Cayan, director of the Climate Research Division at the Scripps Institution of Oceanography in San Diego.
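The report's quoted scaling works out to simple arithmetic; a minimal sketch (the function name and example figures are my own, the ~1.25x factor is from the report as quoted above):

```python
def california_rise_ft(west_antarctic_contribution_ft, factor=1.25):
    """Approximate California sea-level rise from the West Antarctic
    contribution to global mean rise, using the report's ~1.25x factor."""
    return west_antarctic_contribution_ft * factor

# 2 ft of global rise sourced from West Antarctica...
print(california_rise_ft(2.0))  # -> 2.5 ft along the California coast
```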

Cayan was one of seven authors of the report, produced by a working group of the California Ocean Protection Council Science Advisory Team. The council asked for an update on a similar report done in 2010 and updated in 2013. Another one was needed because of newer science and projections on sea-level rise, said John Laird, council chairman and California Natural Resources Agency secretary.

The council will hold workshops this spring and summer on the research, take comments and issue a draft proposal for how to turn it into policy this fall. That could be approved by January.

Sea-level rise already is affecting coastal California, the study said. It's causing more extensive coastal flooding during storms, periodic tidal flooding and increased coastal erosion. Over the short term, the state faces higher sea levels from phenomena like El Niño, during which the central Pacific Ocean warms. El Niño also can trigger stronger storms, which, when combined with sea-level rise, can trigger mudslides, floods and avalanches in the mountains.

Rising seas will worsen, although exact amounts depend on a number of factors, including whether countries successfully curb greenhouse gas emissions and limit temperature rise, it said.

Until 2050, there are minor differences in sea-level rise projections based on greenhouse gas pollution scenarios. They diverge significantly past midcentury, the study said. It gave possible sea-level rise amounts looking at three California locations where there are tide gauges: Crescent City in northern California, the Golden Gate Bridge in San Francisco and La Jolla in San Diego.

By the turn of the century in San Francisco under the lowest estimate, the sea would rise 1 foot. It could climb as much as 6.9 feet. In La Jolla, the ocean would lift 1.1 feet under the lowest estimate or as much as 7.1 feet.

Crescent City faces a range of 1.2 inches under the lowest estimate to as much as 5.9 feet.

However, the report also said those estimates might fall far below what actually transpires. They might underestimate “the likelihood of extreme sea-level rise,” particularly under high greenhouse gas emissions scenarios. It describes one scenario that could bring a 10-foot rise to California by the turn of the century.

“The probability of this scenario is currently unknown, but its consideration is important, particularly for high-stakes, long-term decisions,” the report said.

Some speakers at the meeting, including a California Coastal Commission representative, expressed concern that the low estimates in the report could mislead some people.

“The probability information as currently presented could have the unintended consequence of reducing or reversing the progress that is being made in planning for sea-level rise,” said Madeline Cavalieri, coastal planner at the California Coastal Commission. The low scenarios don't account for the latest science on ice sheet loss, she said.

Some people “may use the tables to select sea-level rise projections that are lower than what is recommended by the current state guidance,” she said.

Cavalieri praised the report for including the extreme scenario of a possible 10-foot rise, saying it is important for planning to protect critical infrastructure. Coastal Commission staff members currently are working on adaptation blueprints with more than 30 of the 76 local governments located in the sea-level rise zone, she said.

David Behar, climate program director at the San Francisco Public Utilities Commission, cautioned that the state needs to be “more transparent and clear about where the 10-foot number comes from.”

“That's the biggest number that I've seen in any of the reports, including in the NOAA report ... so we need to understand where that number came from and help people understand how to use it,” he said. His agency likely will increase rates to fund adaptation, he said, so clarity is essential.

Gavin Newsom, California lieutenant governor and chairman of the State Lands Commission, expressed frustration at the meeting with how to deal with the study results.

“This is all interesting, but what the hell do we do about it,” outside of the state maintaining its actions to cut greenhouse gas emissions and limit climate change, Newsom said.

There are planning issues the state needs to think about, not just on the coast but in the Sacramento-San Joaquin River Delta, the system that provides a large share of the state's drinking water, Cayan said. The delta has leveed islands. If those are overtopped, he said, the circulation and the salinity gradient would change.

California also is considering a massive project installing tunnels beneath the delta to carry water to Southern California instead of the current pumping system. The state might need to carefully study the design and the height of the pipes that would draw the water, Cayan said, to make sure they are high enough.

“California really has to be a leader because this is such an important problem, both from direct impacts but geopolitically, with a lot of populations that are in harm's way,” Cayan said.
There's also the issue of social justice, he said, as “some of the folks that are least able to build a sea wall, or whatever it is, are maybe more exposed.”

Reprinted from Climatewire with permission from E&E News. E&E provides daily coverage of essential energy and environmental news at www.eenews.net.


----------



## ae1905

*The first true-color images of Saturn taken during Cassini's close encounter*

blogs.discovermagazine.com


A processed, true-color image of Saturn’s polar vortex based on photos taken by Cassini on April 26, 2017 during the spacecraft’s first dive between the planet and its rings. (Source: NASA/JPL-Caltech/SSI/Sophia Nasr)

We’ve already been treated to spectacular black and white closeup images of Saturn, beamed home to Earth by the Cassini spacecraft after it dove between the planet and its rings. Now, we’re getting to see what things look like in true color.

Among the first of these images is the one above, processed by Sophia Nasr, an astro-particle physicist working on dark matter. She will begin her PhD studies in physics at UC Irvine in September 2017. (For her full bio, see the end of this post.) I first spotted Nasr’s image on Twitter, where she may be posting more. You can find her here: https://twitter.com/Pharaoness

*SEE ALSO: Cassini shoots through the gap between Saturn and its rings, returning the closest views ever of the planet*
That striking, sky-blue feature is the eye of a persistent hurricane at Saturn’s north pole. The feature is 1,200 miles across, about 20 times larger than the average hurricane eye on Earth. And clouds are swirling around it as fast as 330 miles per hour.
The striking cerulean color is not at all false. It comes from scattering of sunlight, the same phenomenon that produces a blue sky here on Earth.

To produce the image, Nasr used Photoshop to combine three photographs taken using blue, green and red filters. With a little additional tweaking of contrast and other factors, Nasr produced something akin to what the scene would look like to our eyes if we were hitching a ride on Cassini.
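The channel-combination step Nasr describes can be sketched in a few lines of code. This is a minimal illustration of the idea, not her actual Photoshop workflow; it assumes three already-aligned grayscale frames (shot through red, green, and blue filters) loaded as NumPy arrays, and the tiny synthetic arrays are made up for the example:

```python
import numpy as np

def compose_true_color(red, green, blue):
    """Stack three aligned grayscale filter frames into one RGB image.

    Each input is a 2D float array; the output is rescaled to 0-1.
    """
    rgb = np.stack([red, green, blue], axis=-1).astype(float)
    # Rescale so the darkest pixel maps to 0 and the brightest to 1,
    # a crude stand-in for the contrast tweaking done by hand in Photoshop.
    rgb -= rgb.min()
    if rgb.max() > 0:
        rgb /= rgb.max()
    return rgb

# Tiny synthetic example: a bright feature seen through three filters.
r = np.array([[0.2, 0.8], [0.1, 0.9]])
g = np.array([[0.1, 0.7], [0.1, 0.8]])
b = np.array([[0.4, 0.9], [0.2, 1.0]])
image = compose_true_color(r, g, b)
print(image.shape)  # (2, 2, 3)
```

In practice each filter frame would come from a calibrated FITS file, and the alignment step (registering the three exposures, which Cassini took at slightly different times) is the hard part this sketch skips.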

Just as an aside, I actually dreamed last night that I was doing just that. It was quite the wild ride (considering that we were moving at 77,000 miles per hour — Cassini’s actual speed as it zoomed between Saturn and its innermost ring). But in my dream, the eye was blood red, probably because I had seen false-color images of it before (shot in near-infrared wavelengths). So when I woke up and found Nasr’s image on Twitter, I was amazed.
Here’s an animation of the images Nasr used — red, green, blue — and concluding with the natural color result:
Images: NASA/JPL-Caltech/SSI/Sophia Nasr. Animation: Tom Yulsman

Each filter tends to bring out different features in the clouds and gas swirling around the eye. To learn how to do this kind of processing in Photoshop, see this tutorial at the Planetary Society.

Sophia Nasr isn’t the only person to produce a true-color view of the hurricane eye. Here’s another one, created by Jason Major, a graphic designer and space buff:

Processed color composite of Saturn's north polar vortex from @CassiniSaturn's pass on April 26, 2017 pic.twitter.com/58pyMjKx42
— Jason Major (@JPMajor) April 28, 2017

Outstanding!

The hurricane eye on Saturn is part of a much bigger feature, called the Hexagon.
Source: NASA/JPL-Caltech/SSI

This false-color animation is also from the Cassini spacecraft. When NASA first produced it in 2013, it was the highest-resolution view of the feature yet obtained. You can see the eye of the storm swirling at the center.
And here’s a broader view:
Saturn’s north pole Hexagon. (Source: NASA)

The Hexagon is produced by a six-sided jet stream pattern at Saturn’s north pole. It has been observed ever since the Voyager 1 and 2 spacecraft passed by in 1980 and 1981. For a detailed explanation of what sustains it, see this post by Emily Lakdawalla at the Planetary Society (who also did the Photoshop tutorial).

In coming days I’m sure we’ll be treated to yet more beautiful, color imagery acquired by Cassini during its daring, swooping dive. And remember this is just the first of 22 dives, part of what NASA is calling the “Grand Finale.”

This is the final chapter in Cassini’s epic journey. The spacecraft will loop around Saturn approximately once per week, plunging 22 times between the rings and the planet. And then on September 15, 2017, the spacecraft is scheduled to dive into Saturn’s atmosphere, putting an end to the mission.

To conclude, here’s that full bio Sophia Nasr sent me:

Sophia is an astro-particle physicist who researches dark matter, and will begin her PhD studies in physics at UC Irvine in September 2017. While astrophysics, particle physics, and cosmology are her passions, she also revels in planetary science and space missions that uncover mysteries of the planets in our solar system. Sophia is heavily involved in scientific outreach, using her social media reach to not only help people learn about science, but learn to love it.

I can say that I love it even more after seeing her imagery! And I’m looking forward to more.


----------



## ae1905

*Cancer-causing DNA found in stem cells used in some clinical trials*

statnews.com By Ike Swetlitz

Some human stem cell lines grown in labs, including some that researchers have used in experiments to treat serious diseases, contain cancer-causing mutations, scientists reported on Wednesday. The discovery raised alarms that patients could be treated for one disease, such as macular degeneration, only to develop another: cancer.

Harvard scientists obtained samples of most of the human embryonic stem cell lines registered with the National Institutes of Health for use in both basic research and in developing therapies for patients with diseases including diabetes, Parkinson’s, and macular degeneration. They found that five of the 140 lines had cells with a cancer-causing mutation.

At least two of the five lines have been used in experimental treatments tested in clinical trials in an unknown number of patients. None is known to have developed cancer.
You’re probably asking:

*It’s been 18 years since scientists got stem cells from human embryos, launching the “regenerative medicine” revolution. No one noticed this before?*

Actually, a 2011 study found the same cancer-causing mutation, in a gene called TP53. But the study examined a single embryonic stem cell line, said biologist Jeanne Loring of the Scripps Research Institute, who led that research.
 
*Wait, what’s a cell line?*

You start with cells from a days-old human embryo, a hollow ball of 200 or so cells. You remove the stem cells, which are genetically identical, and grow them in lab dishes. The cells divide and proliferate. All of these progeny constitute a cell line. In 2009, President Obama approved the use of federal funds to create such lines from embryos that were going to be discarded by fertility clinics or were donated by couples and met ethical criteria.

*How did the new study go beyond the earlier one?*

Scientists led by Kevin Eggan and Steven McCarroll of Harvard University zeroed in on the 182 supposedly healthy human embryonic stem cell lines that meet Obama’s criteria and were registered with the NIH, they reported in Nature online. They obtained those they could (the NIH registry is just a list; you have to get the actual cells from the labs that made and own them) and did DNA sequencing on 114. They also did DNA sequencing on another 26 lines that had been prepared for human experiments. Of these 140, five had cancer-causing mutations in the TP53 gene.

*Is that dangerous to people who receive the cells?*

Patients do not receive embryonic stem cells; they get cells that those stem cells turn into, like pancreas cells or neurons or heart cells. The problem, Eggan said, is that as stem cells grow in lab dishes “they have a propensity to acquire the same kind of genetic mutations found in human cancers. The final type of cell — liver, lung, pancreas, and anything else — will inherit the mutations, conferring a very real risk” of causing cancer in the patient who received the cells.

*Five cell lines out of 140 doesn’t sound so bad.*

You might think otherwise if you received cells from any of the five. In fact, some cell lines have been used more than others, so five out of 140 might understate the risk. Two of the lines most widely used in research, called H1 and H9, both have the cancer-causing mutations. H1 was used in a famous clinical trial by the biotech company Geron for spinal cord injury. That study was abandoned in 2011, after five patients received cells, but picked up in 2014 by Asterias Biotherapeutics. H9 is the source of cells in a clinical trial for macular degeneration. Other stem cell lines with TP53 mutations are in line for use in other trials. No one knows how many patients have received cells from lines with TP53 mutations.

*Is anyone keeping an eye on the patients to see if they develop cancer?*

A spokesman for Asterias told STAT that Geron and now Asterias have followed the original five patients and have seen “no evidence” of tumor formation. Asterias is giving all participants in the ongoing trial frequent MRIs to look for tumors. WiCell, a nonprofit associated with the University of Wisconsin that owns and supplies the H1 line, said it was unaware of the new findings. “We always want to do what is best for the research community,” said Robert Drape, executive director of WiCell. “Once we have an opportunity to review [the] publication, we will consult leading researchers in the field and determine the appropriate next steps.”

*Shouldn’t the Harvard scientists have sounded an alarm sooner?*

They started seeing cancer-causing mutations in stem cells about 10 months ago, McCarroll said, “and we shared the results ahead of publication,” including telling stem cell scientists about the problem at a meeting last fall. Scientists who control some of the lines have begun their own DNA testing, he said.

*What do other experts think?*

Scripps’s Loring said there was no reason “to say the sky is falling.” There are ways to ensure cells are healthy before they’re implanted in patients. But NIH cancer geneticist Dr. Stephen Chanock suggested that TP53 mutations might be just the tip of the iceberg: “We cannot rule out the possibility of additional, less frequent acquired [mutations] in other cancer genes,” he wrote in Nature in a commentary.
 
*What about the type of stem cells that more and more scientists are using instead of embryonic ones?*

Those are called induced pluripotent stem cells; they come from the cells of already-born people. Unfortunately, any such cells that grow in the lab long enough can accumulate cancer-causing mutations, Loring said. Perversely, cells that do acquire cancer mutations survive better than cells that don’t.

*What’s the solution?*

Neither the Harvard scientists nor Loring nor Chanock believes the discovery of cancer-causing mutations in stem cells should derail stem cell therapies. But the Food and Drug Administration does not require researchers to sequence the DNA of cells before putting them into people, mandating only testing for abnormal chromosomes. That’s a mistake, Loring said.

“We need to use the tools we have to make sure we don’t screw up somebody we’re trying to cure, by giving them cancer.” In her own research testing iPS cells as a treatment for Parkinson’s disease, “we are doing tons of quality control to be sure nothing bad slips into people,” she said. “You have to check your cells even though the FDA does not require it.”
DNA sequencing to catch cancer-causing mutations in stem cells costs about $1,000 per genome. Regulators in both Europe and the US are considering making that mandatory, said Pete Coffey of University College London, who is studying the use of stem cells to treat eye diseases. Although a 5-in-140 risk may seem small, “regulators are going to ask [researchers who propose clinical trials] what are you going to do _when_ it goes wrong, not if.”

*If stem cells can develop cancer-causing mutations before they’re put into people, can they also do so after?*

McCarroll and others think not. “There is something very different about the environment of cells growing in a lab dish versus the body,” he said. But suddenly discovering cancer-causing mutations in cell lines that have been around for nearly two decades is nevertheless enough of a surprise to “underscore the need for regenerative medicine to proceed with care,” Eggan said.

Sharon Begley can be reached at [email protected] 

Follow Sharon on Twitter @sxbegle


----------






## ae1905

*CRISPR Eliminates HIV in Live Animals*

May 2, 2017 



 
New research reveals that HIV DNA can be excised from the genomes of living animals to eliminate further infection. [NIH]

Because the virus hides away and remains latent for extended periods of time, HIV infections have proven notoriously difficult to eliminate. Yet now, new data released from a research team led by investigators at the Lewis Katz School of Medicine at Temple University (LKSOM) and the University of Pittsburgh shows that HIV DNA can be excised from the genomes of living animals to eliminate further infection. Additionally, the researchers are the first to perform this feat in three different animal models, including a "humanized" model in which mice were transplanted with human immune cells and infected with the virus. Findings from the new study were published recently in Molecular Therapy in an article entitled “In Vivo Excision of HIV-1 Provirus by saCas9 and Multiplex Single-Guide RNAs in Animal Models.”
This is the first study to demonstrate that HIV-1 replication can be completely shut down and the virus eliminated from infected cells in animals with a powerful gene-editing technology known as CRISPR/Cas9. The new work builds on a previous proof-of-concept study that the team published in 2016, in which they used transgenic rat and mouse models with HIV-1 DNA incorporated into the genome of every tissue of the animals' bodies. They demonstrated that their strategy could delete the targeted fragments of HIV-1 from the genome in most tissues in the experimental animals.
"Our new study is more comprehensive," noted co-senior study author Wenhui Hu, M.D., Ph.D., associate professor in the Center for Metabolic Disease Research and the department of pathology at LKSOM. "We confirmed the data from our previous work and improved the efficiency of our gene-editing strategy. We also show that the strategy is effective in two additional mouse models, one representing acute infection in mouse cells and the other representing chronic, or latent, infection in human cells."
In this new study, the LKSOM team genetically inactivated HIV-1 in transgenic mice, reducing the RNA expression of viral genes by roughly 60% to 95%—confirming their earlier findings. They then tested their system in mice acutely infected with EcoHIV, the mouse equivalent of human HIV-1. 
 
Methodology used by the investigators in the current study. [Yin et al., Molecular Therapy, 2017]
 "During acute infection, HIV actively replicates," explained co-senior study investigator Kamel Khalili, Ph.D., professor and chair of the department of neuroscience at LKSOM. "With EcoHIV mice, we were able to investigate the ability of the CRISPR/Cas9 strategy to block viral replication and potentially prevent systemic infection." The excision efficiency of their strategy reached 96% in EcoHIV mice, providing the first evidence for HIV-1 eradication by prophylactic treatment with a CRISPR/Cas9 system.
In the third animal model, a latent HIV-1 infection was recapitulated in humanized mice engrafted with human immune cells, including T cells, followed by HIV-1 infection. "These animals carry latent HIV in the genomes of human T cells, where the virus can escape detection," Dr. Hu explained. Amazingly, after a single treatment with CRISPR/Cas9, viral fragments were successfully excised from latently infected human cells embedded in mouse tissues and organs.
In all three animal models, the researchers employed a recombinant adeno-associated viral (rAAV) vector delivery system based on a subtype known as AAV-DJ/8. "The AAV-DJ/8 subtype combines multiple serotypes, giving us a broader range of cell targets for the delivery of our CRISPR/Cas9 system," remarked Dr. Hu. Additionally, the researchers re-engineered their previous gene-editing apparatus to now carry a set of four guide RNAs, all designed to efficiently excise integrated HIV-1 DNA from the host cell genome and avoid potential HIV-1 mutational escape.
To determine the success of the strategy, the team measured levels of HIV-1 RNA and used a novel and cleverly designed live bioluminescence imaging system. "The imaging system, developed by Dr. Won-Bin Young while at the University of Pittsburgh, pinpoints the spatial and temporal location of HIV-1-infected cells in the body, allowing us to observe HIV-1 replication in real time and to essentially see HIV-1 reservoirs in latently infected cells and tissues," stated Dr. Khalili.
The researchers were excited by their findings and are optimistic about their next steps. “The next stage would be to repeat the study in primates, a more suitable animal model where HIV infection induces disease, in order to further demonstrate the elimination of HIV-1 DNA in latently infected T cells and other sanctuary sites for HIV-1, including brain cells," Dr. Khalili concluded. "Our eventual goal is a clinical trial in human patients."


----------



## ae1905

*How going to space affects male and female astronauts differently*

businessinsider.com 
Skye Gould

The effects of zero gravity on the human body are taxing, regardless of gender, and NASA has found no differences between men and women in terms of their psychological and behavioral responses to space flight. 

But sex and gender do seem to have a role in how being in space for long periods of time affects astronauts' bodies. 

In June 2013, NASA, along with the National Space Biomedical Research Institute (NSBRI), released a report after studying the cardiovascular, immunological, sensorimotor, musculoskeletal, reproductive and behavioral  effects of space on men and women. 

Because of an imbalance in the data available at the time — 477 men vs. 57 women — it's difficult to come to solid conclusions based solely on gender, but the research found some intriguing possibilities. We created an infographic (based on NASA's own) that highlights some of the differences NASA found between men and women in terms of the effects (and potential effects, based on physiology on Earth) of long-term space flight: 
   Skye Gould/Business Insider


----------



## ae1905

*Is Technology Too Good for an Old-School Test of Einstein’s Relativity?*

blogs.discovermagazine.com

The July 11, 2010 eclipse as viewed from Easter Island in the South Pacific. _(Credits: Williams College Eclipse Expedition – Jay M. Pasachoff, Muzhou Lu, and Craig Malamut)_

On Aug. 21, sky-gazers from around the world will converge in the United States as a total solar eclipse charts a path from Oregon to South Carolina. In between, on Casper Mountain in Wyoming, you’ll find Don Bruns with his telescope.

A retired physicist, Bruns is using the rare opportunity to test Albert Einstein’s general relativity like Sir Arthur Eddington, who was the first scientist to test the theory back in 1919. At that time, Newton’s law of universal gravitation was still in vogue, but Einstein shook the status quo by introducing his theory of general relativity, which fused concepts of time and three-dimensional space into a four-dimensional continuum called space-time. According to Einstein, gravity wasn’t a force; instead, it was a distortion in the fabric of space-time.

*The First Test*

His theory was just four years old when Eddington put it to the test during an eclipse in 1919. Both Einstein’s and Newton’s theories indicated that light from distant objects would warp, or bend, as it passed through the gravitational fields of massive objects, such as the sun. Einstein’s theory indicated the sun would bend light from a star by a minuscule 1.75 arcseconds—just 0.000486 of a degree—while Newton’s gravity predicted starlight would bend by half that amount. To see who was correct, you’d just need to compare the apparent positions of stars at night to their positions during the day, when their light passed through the sun’s gravitational field.
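The 1.75-arcsecond figure follows from general relativity's formula for the deflection of a light ray grazing the solar limb, α = 4GM/(c²R); Newton's corpuscular prediction is half that. A quick back-of-envelope check with standard solar constants (my arithmetic, not from the article):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
c = 2.998e8         # speed of light, m/s
R_sun = 6.957e8     # solar radius, m

# GR deflection for light grazing the sun's limb, in radians.
alpha_gr = 4 * G * M_sun / (c**2 * R_sun)

# Convert radians to arcseconds (3600 arcseconds per degree).
arcsec = math.degrees(alpha_gr) * 3600

print(f"GR: {arcsec:.2f}\"  Newtonian: {arcsec / 2:.2f}\"")
# GR: 1.75"  Newtonian: 0.88"
```

Note how small this is: roughly a two-thousandth of the sun's apparent half-degree diameter, which is why the measurement was so hard in 1919 and is still delicate today.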

An eclipse was the perfect time to perform such a test. In January of 1919, Eddington photographed the true nighttime positions of stars in the Hyades cluster, which the sun would pass in front of during the eclipse that May. The second half of the experiment required photographing the same stars during the day, in darkness. In May, he photographed the same group during the eclipse, when stars in the Hyades remained visible. By laying the image of true star positions over images of their positions during the eclipse, Eddington could measure the apparent shift caused by the sun’s gravity.

When Eddington compared the two images, a shift was clearly visible. And based on his calculations, Eddington concluded that the theory of relativity predicted the amount of bending more accurately than Newton’s theory did.
One of Eddington’s photo plates from his 1919 experiment. The shift in starlight is marked with faint lines in the image. _(Credit: Wikimedia Commons)_

Just like when scientists at the Laser Interferometer Gravitational-Wave Observatory confirmed the first detection of a gravitational wave in 2015, Eddington’s discovery was front-page news. He became a star. However, upon further inspection, the results left much to be desired for many scientists. Eddington used silver nitrate photographic plates to capture images of the sky, and the image resolution wasn’t precise. Further, one of the three photographic plates yielded a measurement that would have confirmed Newton’s theory of gravity, but it was removed from the experiment due to a “systemic error.”

Eddington was also a known pacifist, and some scientists believed his experiment was fraught with confirmation bias. Following World War I, relations between scientists in Germany and the United Kingdom were frayed, to say the least. Some speculated that Eddington, a Briton, disproved compatriot Newton in favor of Einstein, a German, in an act of international scientific diplomacy to heal old wounds.
Even Stephen Hawking had doubts about Eddington’s results, as he wrote in _A Brief History of Time_. Still, other physicists argue that if presented with Eddington’s data today, many physicists would come to the same conclusion.
*Another Test*

Thus, we return to citizen scientist Bruns who, in a sense, will try to vindicate Eddington and prove his findings stand up. But nearly 100 years later, Bruns’ attempt may be foiled by technology that’s actually too good.

Modern photography has long since moved beyond plates, but in this case, the march of technological progress doesn’t necessarily yield better results. Digital camera pixels, tiny as they are, are still too large to directly capture the apparent position shift that proves relativity.

Eddington’s original experiment showed that the sun shifts starlight’s apparent position by a mere 1.75 arcseconds. When light falls on the boundary between two pixels, imaging software nudges it into a neighboring pixel. So if a star’s centroid falls on that boundary, the picture will show shifts that are bigger or smaller than what really exists.
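The way around the pixel-size problem is centroiding: because a star's image is blurred across several pixels, its position can be recovered to a small fraction of a pixel by taking the intensity-weighted mean of the blur. A minimal one-dimensional sketch (synthetic star profile, illustrative only; it is not Bruns's actual software):

```python
import numpy as np

def centroid(profile):
    """Intensity-weighted centroid of a 1D star profile, in pixel units."""
    x = np.arange(len(profile))
    return float(np.sum(x * profile) / np.sum(profile))

# A blurred star whose true center sits exactly between pixels 3 and 4.
pix = np.array([0.0, 0.1, 0.8, 2.0, 2.0, 0.8, 0.1, 0.0])
print(centroid(pix))  # 3.5 -- recovered to sub-pixel precision
```

Real pipelines do this in two dimensions and must also contend with noise, seeing, and the pixel-boundary rounding described above, which is where the residual margin of error comes from.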
Bruns’ setup for eclipse day. _(Courtesy: Don Bruns)_

“It’s the most challenging experiment I’ve done in my career,” he says. In 1973, the last time a relativity experiment was conducted during a total solar eclipse, technology “was considered ancient. I mean glass plates, no computers, it’s really a different era. So I thought in 2017…it’d be a piece of cake, not a problem. Well, turns out it’s still pretty tough.”

Bruns claims that there’s “no scientific value” to what he’s doing; he says he’s just testing relativity for fun. In February 2016, you’ll remember, scientists announced the first gravitational wave detection, confirming Einstein was correct all along. Still, like the number-hunters who calculate pi to an ever-increasing number of digits, sometimes the act of science is a reward in itself.

Alex King, chair of physics and astronomy at Austin Peay State University, would disagree with Bruns. King believes there’s certainly a compelling scientific reason to conduct an old-school experiment. The resolution flaws in Eddington’s photos were reason enough for APSU to seek funding to conduct the same experiment this summer. But APSU’s efforts were blocked by what King calls the politicization of “funding for basic science”—they couldn’t get the money.

Fortunately, Bruns has the experience to pull it off on eclipse day. In 1992, Bruns founded a company called Stellar Products that made standard adaptive optics systems. Bruns left Stellar in 2014, but that experience gave him deep knowledge of equipment options that would work well. When he chose the Finger Lakes ML8051 camera, the manufacturer loaned him one for free. He’ll mount the camera on a TeleVue NP101 telescope to optimize focal ratio. Unfortunately, so long as the setup relies on pixels to make a picture, there will be some margin of error. After a few trial runs, though, Bruns thinks he’s trimmed the margin to roughly 1 percent. He says other citizen scientists who’ve attempted the experiment only managed to narrow their margins of error to 10 percent.

The camera Bruns opted for is monochrome—it shoots in black and white. Digital color photography works like pointillism; instead of painting a solid image, it captures the picture in tiny dots. According to Bruns, when a digital camera shoots in color every other pixel has a different response to light, which affects accuracy.

“They used to have these very long, 15-, 20-foot long telescopes operating at a very small focal ratio,” he says, referring to Eddington and other scientists in Einstein’s era. “But now we have a very short compact telescope. It’s only like 2 feet long, but optical distortion is now a problem.”

Shorter, modern telescopes correct for coma and other optical aberrations, but the residual distortion they introduce is bigger than the shift from Einstein’s deflections, Bruns says. In other words, no matter what, Bruns is making trade-offs.

*Just for Fun*

In Bruns’ case, one of the largest trade-offs is his time. By the time he spoke with _Discover_, he’d already spent around 2,000 hours solving for distortion alone. That’s out of nearly 4,000 project hours total, the rest of which he devoted to sorting through star charts for base comparison data, writing software to better process his data and addressing every other part of the experiment that could affect the margin of error. He even studied how to minimize the impact of breeze and atmospheric turbulence, which Bruns says can spread starlight across three to four pixels.

During totality, Bruns plans to take calibration images, something that scientists of yore decided not to do. Ask why and he’ll tell you about Erwin Finlay-Freundlich, a physicist who “was a little frustrated [that] after 3 or 4 eclipses…no one did the experiment quite right.”

“You need to take pictures of the stars during the eclipse, but you also need to take calibration images during the eclipse,” says Bruns, citing Finlay-Freundlich’s method. “And a lot of people of course didn’t want to do that because the eclipses only last 3 to 5 minutes, and the exposures each took a minute or two, so they didn’t want to waste time during totality taking calibration images.”

At its point of greatest duration, the 2017 eclipse will last 2 minutes, 40.2 seconds. Fortunately, digital photography can get the required exposure in milliseconds. This means Bruns can test relativity Finlay-Freundlich’s way. And when he does, he says he’ll be the first person in history to do it from the ground.

“Just for fun” indeed.


----------



## dasos

better than anything id see on jimmy neutron


----------



## ae1905

scientificamerican.com

 Agata Blaszczak-Boxe

Certain individuals are seriously impaired when it comes to recognizing individuals of another color

_Credit: Getty Images_ 

We tend to be worse at telling apart faces of other races than those of our own race, studies have found. Now research shows some people are completely blind to features that make other-race faces distinct. Such an impairment could have important implications for eyewitness testimony in situations involving other-race suspects.

The ability to distinguish among members of one's own race varies wildly: some people can tell strangers apart effortlessly, whereas others cannot even recognize the faces of their own family and friends (a condition known as prosopagnosia). Psychologist Lulu Wan of the Australian National University and her colleagues wanted to quantify the distribution of abilities for recognizing other-race faces. They asked 268 Caucasians born and raised in Australia to memorize a series of six Asian faces and conducted the same experiment, involving Caucasian faces, with a group of 176 Asians born and raised in Asia who moved to Australia to attend university. In 72 trials, every participant was then shown sets of three faces and had to point to the one he or she had learned in the memorization task.

The authors found that 26 Caucasian and 10 Asian participants—8 percent of the collective study population—did so badly on the test that they met the criteria for clinical-level impairment. “We know that we are poor at recognizing other-race faces,” says Jim Tanaka, a professor of psychology at the University of Victoria in British Columbia, who was not involved in the research. “This study shows just how poor some people are.” Those individuals “would be completely useless in terms of their legal value as an eyewitness,” says study co-author Elinor McKone, a professor of psychology at the Australian National University. The world's legal systems do not, however, take into account individual differences in other-race face recognition, she notes.

One's lifetime level of exposure to other races could factor into a person's ability to recognize people of another color, according to the findings published in the January issue of the _Journal of Experimental Psychology: General_. Among 106 Asian participants born and raised in Australia, only about 3 percent were blind to Caucasian faces. In comparison, nearly 6 percent of the Asians born and raised in Asia had the impairment.

The effect extends to other races, too. In a study published in 2001 in _Psychology, Public Policy, and Law,_ black people recruited in South African shopping malls, who had average levels of interracial contact, were better at recognizing faces of their own race than of others.

This article was originally published with the title "Are You Blind to Faces of Other Races?"


----------



## ae1905

*'Lost' forests covering an area two-thirds the size of Australia*

theconversation.com

Ben Sparrow

A new global analysis of the distribution of forests and woodlands has “found” 467 million hectares of previously unreported forest – an area equivalent to 60% of the size of Australia. 
The discovery increases the known amount of global forest cover by around 9%, and will significantly boost estimates of how much carbon is stored in plants worldwide.

The new forests were found by surveying “drylands” – so called because they receive much less water in precipitation than they lose through evaporation and plant transpiration. As we and our colleagues report today in the journal Science, these drylands contain 45% more forest than has been found in previous surveys. 

We found new dryland forest on all inhabited continents, but mainly in sub-Saharan Africa, around the Mediterranean, central India, coastal Australia, western South America, northeastern Brazil, northern Colombia and Venezuela, and northern parts of the boreal forests in Canada and Russia. In Africa, our study has doubled the amount of known dryland forest.
The world’s drylands: forested areas shown in green; non-forested areas in yellow. Bastin et al., Science (2017)

With current satellite imagery and mapping techniques, it might seem amazing that these forests have stayed hidden in plain sight for so long. But this type of forest was previously difficult to measure globally, because of the relatively low density of trees.
What’s more, previous surveys were based on older, low-resolution satellite images that did not include ground validation. In contrast, our study used higher-resolution satellite imagery available through Google Earth Engine – including images of more than 210,000 dryland sites – and used a simple visual interpretation of tree number and density. A sample of these sites was compared with field information to assess accuracy.

*Unique opportunity*

Given that drylands – which make up about 40% of Earth’s land surface – have more capacity to support trees and forest than we previously realised, we have a unique chance to combat climate change by conserving these previously unappreciated forests.

Drylands contain some of the most threatened, yet disregarded, ecosystems, many of which face pressure from climate change and human activity. Climate change will cause many of these regions to become hotter and even drier, while human expansion could degrade these landscapes yet further. Climate models suggest that dryland biomes could expand by 11-23% by the end of this century, meaning they could cover more than half of Earth’s land surface.

Considering the potential of dryland forests to stave off desertification and to fight climate change by storing carbon, it will be crucial to keep monitoring the health of these forests, now that we know they are there.
 Ground-based observations were a crucial part of the survey. TERN AusPlots, Author provided 

*Climate policy boost*

The discovery will dramatically improve the accuracy of models used to calculate how much carbon is stored in Earth’s landscapes. This in turn will help calculate the carbon budgets by which countries can measure their progress towards the targets set out in the Kyoto Protocol and its successor, the Paris Agreement.

Our study increases the estimates of total global forest carbon stocks by anywhere between 15 gigatonnes and 158 gigatonnes of carbon – an increase of between 2% and 20%.
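
Working backwards from those two figures gives the baseline stock the percentages imply (an illustrative back-calculation from the numbers quoted above, not a figure from the study):

```python
# Back-calculate the prior global forest carbon stock implied above:
# a 15 Gt addition is quoted as a 2% increase, 158 Gt as a 20% increase
baseline_low = round(15 / 0.02)    # implied baseline at the low end, in Gt
baseline_high = round(158 / 0.20)  # implied baseline at the high end, in Gt

print(baseline_low, baseline_high)  # prints 750 790
```

So both ranges are consistent with a prior estimate of roughly 750-790 gigatonnes of carbon in global forests.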

This study provides more accurate baseline information on the current status of carbon sinks, on which future carbon and climate modelling can be based. This will reduce errors for modelling of dryland regions worldwide. Our discovery also highlights the importance of conservation and forest growth in these areas. 
_The authors acknowledge the input of Jean-François Bastin and Mark Grant in the writing of this article. The research was carried out by researchers from 14 organisations around the world, as part of the UN Food and Agriculture Organization’s Global Forest Survey._


----------



## ae1905

*Pope Francis invites scientists to the Vatican after Catholic Church realises the Big Bang is real*
Andrew Griffin


The Vatican has invited the world's leading scientists and cosmologists to try and understand the Big Bang.

Astrophysicists and other experts will attend the Vatican Observatory to discuss black holes, gravitational waves and space-time singularities as it honors the late Jesuit cosmologist considered one of the fathers of the idea that the universe began with a gigantic explosion.

The conference – which runs through the week – is part of an increasing admission by the church that scientific theories are real and not necessarily in contradiction with theological doctrine.


Pope Francis declared in 2014, for instance, that God is not "a magician with a magic wand" and that evolution and Big Bang theory are real.

The conference honours the Jesuit priest Monsignor Georges Lemaitre and is being held at the Vatican Observatory. The observatory was founded by Pope Leo XIII in 1891 to help correct the notion that the Roman Catholic Church was hostile to science.

In 1927, Lemaitre was the first to explain that the receding of distant galaxies was the result of the expansion of the universe, a result he obtained by solving equations of Einstein's theory of general relativity. 

Lemaitre's theory was known as the "primeval atom," but it is more commonly known today as the big-bang theory. 

"He understood that looking backward in time, the universe should have been originally in a state of high energy density, compressed to a point like an original atom from which everything started," according to a press release from the Observatory. 

The head of the Vatican Observatory, Jesuit Brother Guy Consolmagno, says Lemaitre's research proves that you can believe in God and the big-bang theory. 

"Lemaitre himself was very careful to remind people — including Pope Pius XII — that the creative act of God is not something that happened 13.8 billion years ago. It's something that happens continually," Consolmagno said Monday. 

Believing merely that God created the big bang means "you've reduced God to a nature god, like Jupiter throwing lightning bolts. That's not the God that we as Christians believe in," he said. 

Christians, he said, believe in a supernatural God who is responsible for the existence of the universe, while "our science tells us how he did it."


----------



## ae1905

*Why you shouldn’t let your child play on your iPad*

businessinsider.com

Laura Sanders, Science News









One of the most pressing and perplexing questions parents have to answer is what to do about screen time for little ones. Even scientists and doctors are stumped. That’s because no one knows how digital media such as smartphones, iPads and other screens affect children. 

The American Academy of Pediatrics recently put out guidelines, but that advice was based on a frustratingly slim body of scientific evidence, as I’ve covered. Scientists are just scratching the surface of how screen time might influence growing bodies and minds. Two recent studies point out how hard these answers are to get. But the studies also hint that the answers might be important. 

In the first study, Julia Ma at the University of Toronto and colleagues found that, in children younger than 2, the more time spent with a handheld screen, such as a smartphone or tablet, the more likely the child was to show signs of a speech delay. Ma presented the work May 6 at the 2017 Pediatric Academic Societies Meeting in San Francisco. 

The team used information gleaned from nearly 900 children’s 18-month checkups. Parents answered a questionnaire about their child’s mobile media use and then filled out a checklist designed to identify heightened risk of speech problems. This checklist is a screening tool that picks up potential signs of trouble; it doesn’t offer a diagnosis of a language delay, points out study coauthor Catherine Birken, a pediatrician at The Hospital for Sick Children in Toronto. 

Going into the study, the researchers didn’t have expectations about how many of these toddlers were using handheld screens. “We had very little clues, because there is almost no literature on the topic,” Birken says. “There’s just really not a lot there.” 

It turns out that about 1 in 5 of the toddlers used handheld screens, and those kids had an average daily usage of about a half hour. Handheld screen time was associated with potential delays in expressive language, the team found. For every half hour of mobile media use, a child’s risk of language delay increased by about 50 percent. 

“The relationship is not that strong,” Birken says, and those numbers come with big variations. Still, a link exists. And finding that association means there’s a lot more work to do, Birken says. In this study, researchers looked only at time spent with handheld screens. Future studies could investigate whether parents watching along with a child, the type of content or even time of day might change the calculation. 

A different study, published April 13 in _Scientific Reports_, looked at handheld digital device use among young children and its relationship to sleep. As a group, kids from ages 6 months to 3 years who spent more time using mobile touch screen devices got less sleep at night. 

Parent surveys filled out online indicated that each hour of touch screen use was linked to 26.4 fewer minutes of night sleep and 10.8 minutes more sleep during the day. Extra napping time “may go some way to offset the disturbed nighttime sleep, but the total sleep time of high users is still less than low users,” says study coauthor Tim Smith, a cognitive psychologist at Birkbeck, University of London. Each additional hour of touch screen use is linked to about 15 minutes less sleep over 24 hours. 
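
The per-24-hour figure follows from the two survey numbers just quoted (a simple subtraction, shown only to make the arithmetic explicit):

```python
# Net sleep change per hour of touch screen use, from the survey figures
night_loss = 26.4  # minutes less sleep at night per hour of use
day_gain = 10.8    # minutes more daytime napping per hour of use

net_loss = round(night_loss - day_gain, 1)
print(net_loss)  # prints 15.6, i.e. "about 15 minutes" less sleep per 24 hours
```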

An earlier analysis of 20 independent studies found a similar link between portable screen use and less sleep among older children. The new results offer “a consistent message that the findings from older children translate into those younger,” says Ben Carter of King’s College London, who was a coauthor on the study of older children. 

So the numbers are in. Daily doses of _Daniel Tiger’s Neighborhood_ on a mobile device equals 7.5 minutes less sleep and a 50 percent greater risk of expressive language delay for your toddler, right? Well, no. It’s tempting to grab onto these numbers, but the science is too preliminary. In both cases, the results show that the two things go together, not that one caused the other. 

It may be a long time before scientists have answers about how digital technology affects children. In the meantime, you can follow the American Academy of Pediatrics’ recently updated guidelines, which discourage screens (except for video chatting) before 18 months of age and for all children during meals or in bedrooms. 

We now live in a world where smartphones are ever-present companions, a saturation that normalizes the sight of small screens in tiny hands. But I think we should give that new norm some extra scrutiny. The role of mobile devices in our kids’ lives — and our own — is something worth thinking about, hard. 

Read the original article on Science News. Copyright 2017. Follow Science News on Twitter.


----------



## ae1905

*A giant lava lamp inside the Earth might be flipping the planet's magnetic field*

theconversation.com

Paula Koelemeijer

If you could travel back in time 41,000 years to the last ice age, your compass would point south instead of north. That’s because for a period of a few hundred years, the Earth’s magnetic field was reversed. These reversals have happened repeatedly over the planet’s history, sometimes lasting hundreds of thousands of years. We know this from the way a reversal affects the formation of magnetic minerals, which we can now study on the Earth’s surface.

Several ideas exist to explain why magnetic field reversals happen. One of these just became more plausible. My colleagues and I discovered that regions on top of the Earth’s core could behave like giant lava lamps, with blobs of rock periodically rising and falling deep inside our planet. This could affect its magnetic field and cause it to flip. The way we made this discovery was by studying signals from some of the world’s most destructive earthquakes.

Around 3,000km below our feet – 270 times further down than the deepest part of the ocean – is the start of the Earth’s core, a liquid sphere of mostly molten iron and nickel. At this boundary between the core and the rocky mantle above, the temperature is almost 4,000℃, similar to that on the surface of a star, and the pressure is more than 1.3 million times that at the Earth’s surface.

On the mantle side of this boundary, solid rock gradually flows over millions of years, driving the plate tectonics that cause continents to move and change shape. On the core side, fluid, magnetic iron swirls vigorously, creating and sustaining the Earth’s magnetic field that protects the planet from the radiation of space that would otherwise strip away our atmosphere.

Because it is so far underground, the main way we can study the core-mantle boundary is by looking at the seismic signals generated by earthquakes. Using information about the shape and speed of seismic waves, we can work out what the part of the planet they have travelled through to reach us is like. After a particularly large earthquake, the whole planet vibrates like a ringing bell, and measuring these oscillations in different places can tell us how the structure varies within the planet.







In this way, we know there are two large regions at the top of the core where seismic waves travel more slowly than in surrounding areas. Each region is so large that it would be 100 times taller than Mount Everest if it were on the surface of the planet. These regions, termed large low-velocity provinces or more often just “blobs”, have a significant impact on the dynamics of the mantle. They also influence how the core cools, which alters the flow in the outer core. 

Several particularly destructive earthquakes over recent decades have enabled us to measure a special kind of seismic oscillations that travel along the core-mantle boundary, known as Stoneley modes. Our most recent research on these modes shows that the two blobs on top of the core have a lower density compared to the surrounding material. This suggests that material is actively rising up towards the surface, consistent with other geophysical observations. 

These regions might be less dense simply because they are hotter. But an exciting alternative possibility is that the chemical composition of these parts of the mantle cause them to behave like the blobs in a lava lamp. This would mean they heat up and periodically rise towards the surface, before cooling and splashing back down on the core.

Such behaviour would change the way in which heat is extracted from the core’s surface over millions of years. And this could explain why the Earth’s magnetic field sometimes reverses. The fact that the field has changed so many times in the Earth’s history suggests that the internal structure we know today may also have changed.

We know the core is covered with a landscape of mountains and valleys like the Earth’s surface. By using more data from Earth oscillations to study this topography, we will be able to produce more detailed maps of the core that will give us a much better understanding of what is going on deep below our feet.


----------



## ae1905

*Why these researchers think dinosaurs were minutes away from surviving extinction*

washingtonpost.com 


The asteroid that wiped out the dinosaurs 66 million years ago left a massive crater off the coast of Mexico's Yucatan Peninsula. Scientists are finally drilling into the Chicxulub crater to see what secrets it holds. (Jenny Starrs/The Washington Post) 

From our standpoint 66 million years later, it's easy to assume the demise of the dinosaurs was an inevitability.

But an international team of researchers is making a radical argument for why that may not be the case: Had the asteroid that likely wiped out the dinosaurs slammed into the planet a few minutes earlier or later, the scientists say, the fabled reptiles could still be walking the earth now.

That conclusion makes up one of the most intriguing revelations in “The Day the Dinosaurs Died,” a BBC Two documentary that was filmed across three continents during the past year before airing this week.

How is it possible dinosaurs could still be alive?

If the massive asteroid that smashed into present-day Yucatan hit the Atlantic Ocean or somewhere else, the scientists maintain, the rock would have avoided an area made up primarily of limestone and evaporated ocean sediments and rich in carbon dioxide, sulfur and deadly gypsum. Due to the earth's rotation, even a minute or two could have significantly changed the outcome of the impact.

It was, for all intents and purposes, a kill shot for the giant reptiles roaming the planet.

“When the asteroid hits with the force of something like 10 billion Hiroshima explosions, all of that gets pumped up in the atmosphere, and it may have been really critical for the mass extinction that followed as it blocked out the sun,” Sean P. Gulick, a University of Texas professor who studies catastrophism in the geologic record, told The Washington Post. “A few minutes earlier or later and the asteroid would've hit the Atlantic or the Pacific Ocean and not slammed into a big, volatile platform that was then vaporized as it spread upward and out.”
Known as the Chicxulub crater, the impact zone lies 24 miles off the Yucatán Peninsula in Mexico. The impact left in its wake a hole in the Earth 20 miles deep and 120 miles across, scientists say, a site that is now covered completely by 66 million years worth of solid rock and sediment.

To reach their shocking conclusion, the scientists drilled through that rock and into the site of the impact crater more than 1,300 meters below the seafloor. Gulick, who appears in the BBC Two program, said drilling into the crater is something he's been pushing for, with grant proposals and lobbying, for more than 15 years.

“The idea was a little outside of the box,” he said. “When scientists are seeking funding, most of the time people are going after some question about past climates or earthquakes or some very fundamental ocean earth science topic, but we were saying we wanted to drill into an impact crater, which has a different ring to it.”

“It just so happens that this particular crater had an extremely important role in the history of our planet,” he added.

The idea that an asteroid impact caused the dinosaurs to vanish is a widely accepted theory, but the scientists said they were looking for physical evidence to bolster it. Seismic images showed researchers where they could find the crater's central impact zone, known as the “peak ring.”

With that in mind, they had three different goals:

1. Better understanding physical processes that shape impact craters.
2. Investigating the different “kill mechanisms” in place, such as the type of material released into the atmosphere, that may have caused the extinction of the dinosaurs.
3. Studying the microbial life that moved into the subsurface in the wake of the impact.

Eight weeks of intensive drilling were required to collect more than 260 rock cores, which were extracted and taken to the University of Bremen in Germany for examination, according to BBC Two.

That analysis — requiring 800 meters of rock being split, tested and photographed — resulted in some extraordinarily detailed insights. The scientists believe they have proved that the asteroid that smashed into the Yucatan Peninsula was moving at 40,000 mph and instantly vaporized upon hitting the water.

It was, BBC Two notes, the equivalent of a grain of sand slamming into a bowling ball, but the impact was so powerful and hot that it turned the surrounding sea to steam and traveled miles into the earth's crust. The rock that was pushed upward, the scientists found, formed “a tower higher than the Himalayas” before collapsing to “form a strange ring of peaks that exists today,” according to BBC Two.

All of it, the researchers found, took place in the space of 10 minutes.

“It's an amazing oceanographic event, even more so because we see in the cores that life came back pretty quickly,” Gulick said. “We discovered that organisms started to evolve within the sea floor at the crater within a few tens of thousands of years — we know for certain by 30,000 years.”

What followed immediately after the impact was a scene reminiscent of a modern-day nuclear holocaust, mixed with profound natural disasters on a mind-boggling scale.

A radioactive fireball that reached 18,000 degrees scorched the Earth for 600 miles in every direction and unleashed the largest tsunami in history, Gulick said. A deadly vapor containing billions of tons of sulfates fanned out over the globe, blocking sunlight and lowering temperatures, while molten material from the crater rained down upon the Earth for thousands of miles in every direction, starting fires and turning the atmosphere into an oven, according to BBC Two.

Ben Garrod, an evolutionary biologist who appears in the program, said global temperatures plunged more than 50 degrees within days.

“This is where we get to the great irony of the story — because in the end it wasn’t the size of the asteroid, the scale of blast, or even its global reach that made dinosaurs extinct — it was where the impact happened,” Garrod said.

“In this cold, dark world, food ran out of the oceans within a week and shortly after on land,” he added. “With nothing to eat anywhere on the planet, the mighty dinosaurs stood little chance of survival.”

The dinosaurs' sudden ending did have an upside, according to Alice Roberts, a professor of public engagement in science at the University of Birmingham, who appears throughout the documentary.

“Just half a million years after the extinction of the dinosaurs and landscapes around the globe had filled with mammals of all shapes and sizes,” she said. “Chances are, if it wasn’t for that asteroid, we wouldn’t be here to tell the story today.”


----------



## ae1905

*Tinder for T. rex: Experts helped us write dating profiles for dinosaurs*

washingtonpost.com

Sarah Kaplan

The moment I read the phrase, “_Tyrannosaurus rex_ was a sensitive lover, new dinosaur discovery suggests,” I thought it sounded like the opening line to a dinosaur's Tinder profile.

Turns out it was just the headline on a Guardian article covering new research suggesting that T. rex dinosaurs had hypersensitive snouts that could have been used in mating.

But I rather like the idea of a dating profile for a dinosaur. So, in a fit of caffeine-induced absurdity, I decided to write one myself. I managed to talk Emily Chow, one of The Washington Post's top-notch designers, into making it look like a real Tinder profile.











But a dating app is no use to a lonely dino if he's the only guy on it. So I emailed a bunch of paleontologists and asked whether they would be willing to create a profile for their favorite dinosaurs.

Turns out, crafting a profile that will charm a dinosaur is even harder than trying to date a human. There's a lot scientists don't know about dinosaur lifestyles — whether a given species lived in herds or alone, how often they mated and with whom, whether they cared for their young — so it's hard to tell what would appeal to them.

But paleontologists are a pretty resourceful bunch. Not to mention hilarious (and surprisingly raunchy). Here's how they would attempt to woo a dinosaur mate. Which would you swipe right on?

*Dreadnoughtus schrani: *One of history's largest land animals, this gigantic South American sauropod was discovered in 2014.

Full-bodied sauropod, enjoys standing and eating. Turnoffs: Interrupting me while I’m eating; things I can’t eat; gravity. If you’re into doing terrible things to ferns, drop me a line and we’ll defoliate together.

— Kenneth Lacovara, paleontologist at Rowan University

*Anzu wyliei: *A gigantic oviraptor species unofficially known as “the chicken from Hell.”

SD > ND > MT. Snacks on fruit, lizards, mammals, and Triceratops eggs. Likes flashy wing and tail plumage and a great head crest. Daddy to 22 beautiful chicks. 7' 5" so you gotta be tall. No comparisons to poultry please. LOL

— Matthew Lamanna, assistant curator of vertebrate paleontology at the Carnegie Museum of Natural History

*Parasaurolophus walkeri: *A North American duck-billed dinosaur with nasal passages that may have produced a swan-like honk and an elaborate head crest that could have been used as a resonating chamber to magnify the noises.

I’ll sing you a song of the dinoland. I am the best tooter on my block. Applying for Juilliard next year. Although some of my best work may sound like farting noises, I think I just have a new sound that is too fresh for some. I am just misunderstood. But I promise if you let me mate with you, I will help watch the eggs 20% of the time.

— Carrie Levitt-Bussian, paleontology collections manager at the Natural History Museum of Utah

*Oviraptor: *A genus of small birdlike dinosaurs that lived in Mongolia during the late Cretaceous.

I am new to Mongolia and I'm looking for my partner in crime. I love to run, hunt, and currently working on some mating rituals — perhaps you can critique my mating dance and feather displays  I consider myself a feminist — I have no problem brooding eggs while you're out with your friends or at work! And yes, I do preen my feathers regularly!

— Eric Gorscak, paleontologist at the Field Museum of Natural History

*Tyrannosaurus rex: *Looks like my T. rex has some competition. (Who are we kidding? This dude is definitely out of my league.)

Fitness-minded apex predator with plenty of “rex” appeal looking for a tyrant lizard queen. Let's grab Triceratops tacos and watch the sun set over the Western Interior Seaway.
About me: love whiskey, travel, and working out. Biceps looking great but have some trouble with pushups. Can't run faster than 10 mph, but then again, neither can you. Eggs in the picture are my sister's.

The asteroid is coming so I'm not looking for anything serious. Basically just DTF (Down To Fossilize), but I'm cool to hang out and rub snouts afterwards. Not into vegetarians, smokers, drama, middle-age women. Please be under 5 tons.

13 feet tall because apparently that's important to you ladies...

— Sarah Werning, paleontologist at Des Moines University

*Velociraptor: *A genus of small, swift, probably feathered dinosaurs.

Looking for a “clever girl”? I'm small but fierce and on the hunt for a mate. Serious applicants only. Mess with me, and I'll bring out the claws.

— Brian Cleveland, copy editor for The Washington Post

Do you know an extinct species in need of someone special? Send us your best dinosaur dating profiles and we'll share them here.


----------



## ae1905

*A Peculiar Star Is Doing Peculiar Things, Again*

blogs.discovermagazine.com

Infrared: IPAC/NASA; Ultraviolet: STScI (NASA)

There’s a star 1,300 light years away that has exhibited some of the strangest behavior ever seen: something dims its light by as much as 20 percent, something far bigger than a planet. It’s called KIC 8462852, but most people shorthand it Tabby’s Star, or Boyajian’s Star, for its discoverer, Tabetha Boyajian.

Here’s the thing, though. Absolutely nobody knows why it’s dimming that much. It could be a massive fleet of comets or the debris of a planet. But it’s not giving off much infrared excess, the “heat glow” of starlight absorbed and re-emitted by surrounding dust. And now, it seems to be dimming again, either helping or complicating the search for a solution.

Boyajian and co-investigator Jason Wright first put out the alert, hoping to garner observations from telescopes worldwide. They’re hoping at least one telescope can grab spectra from the star to see what is causing the dimming.

CALL TO ACTION!! https://t.co/ll69iRowqi
— Tabetha Boyajian (@tsboyajian) May 19, 2017

​So far the dimming is at 2-3 percent, meaning the transit of … something is just starting. Tabby’s Star has a dedicated telescope waiting to find such an event, so the big observation period could yield further clues to what’s occurring.

Ok, it’s time we tell you: some people think it’s aliens. The argument, put forth by Wright, is that in the absence of a good hypothesis, all avenues must be explored, and that includes giant Dyson swarm machines harnessing the power of the star. Gathering the spectra could help rule that out, or bolster the case for that “all other avenues exhausted” scenario.

Here’s the thing, too: you can get in on the action. Amateur astronomers use smaller scopes to track the star, which is bigger and older than the Sun. It’s at around 12th magnitude in the direction of Cygnus. So get out there tonight and hunt for some aliens.

_This article originally appeared on Astronomy.com._


----------



## ae1905

*April marked the 388th month in a row that the global temperature was warmer than average*

blogs.discovermagazine.com

To find a month when the global average temperature over the land and oceans was below average, you have to go all the way back to December 1984, according to the latest monthly analysis from the National Oceanic and Atmospheric Administration.

Including April 2017, that makes it 388 straight months in which the global temperature has been warmer than the 20th century average.

Like NASA’s independent analysis released earlier this week, NOAA finds that last month was the second warmest April in records dating back to 1880.

SEE ALSO: *The heat goes on: this past April was second warmest in records dating back to 1880 — as were February and March*
From NOAA’s monthly global climate report, released today:

Warmer-than-average temperatures during the month were observed across much of the world’s land surfaces, with the most notable warm temperature departures from average across the Northern Hemisphere higher latitudes, specifically across much of central and eastern Asia, Alaska and the eastern half of the contiguous U.S., where temperatures were 3.0°C (5.4°F) above average or higher. Several locations across Russia’s Far East had record warm temperatures during April 2017.
​As the following map shows, there were some regions of the globe that experienced cooler than normal temperatures in April:

Most notable, according to NOAA, was northern Canada. Here temperatures were 3.6°–5.4°F below average or lower.

Even so, no land areas of the globe experienced record cold in April.

Over the long run, human-caused global warming has loaded the dice, making unusual warmth much more likely than unusual cold. And that has had palpable impacts.

For example, between 1951 and 1980, much less than 1 percent of the Northern Hemisphere’s land areas experienced extreme heat during summer. By the first decade of the 21st century, extreme summertime heat typically was covering 10 percent of the land areas.


----------



## ae1905

*What makes chocolate so deliciously melty in your mouth?*

blogs.discovermagazine.com

_Mmmmm…. chocolate! It’s not just the flavor that makes it so delicious, it’s also the rich texture in your mouth. But what factors lead to that smooth film that coats your mouth when you eat chocolate? If you think it’s simply melted cocoa butter, think again! According to this study, properties of both the chocolate and your saliva contribute to the “lubrication” of the chocolate as you chew it. These scientists measured the physical properties of molten chocolate mixed with either saliva or salty water (PBS) as well as “chocolate expectorated after chewing till the point of swallow” (yum!). They report that the cocoa butter, sugar particles, and saliva also play a role in developing the texture of chewed chocolate. We just hope the poor souls who were asked to spit out chocolate before swallowing were well compensated!_
*Lubrication of chocolate during oral processing.*
“The structure of chocolate is drastically transformed during oral processing from a composite solid to an oil/water fluid emulsion. Using two commercial dark chocolates varying in cocoa solids content, this study develops a method to identify the factors that govern lubrication in molten chocolate and saliva’s contribution to lubrication following oral processing. In addition to chocolate and its individual components, simulated boluses (molten chocolate and phosphate buffered saline), in vitro boluses (molten chocolate and whole human saliva) and ex vivo boluses (chocolate expectorated after chewing till the point of swallow) were tested. The results reveal that the lubrication of molten chocolate is strongly influenced by the presence of solid sugar particles and cocoa solids. The entrainment of particles into the contact zone between the interacting surfaces reduces friction such that the maximum friction coefficient measured for chocolate boluses is much lower than those for single-phase Newtonian fluids. 

The addition of whole human saliva or a substitute aqueous phase (PBS) to molten chocolate dissolves sugar and decreases the viscosity of molten chocolate so that thinner films are achieved. However, saliva is more lubricating than PBS, which results in lower friction coefficients for chocolate-saliva mixtures when compared to chocolate-PBS mixtures. A comparison of ex vivo and in vitro boluses also suggests that the quantity of saliva added and uniformity of mixing during oral processing affect bolus structure, which leads to differences in measured friction. It is hypothesized that inhomogeneous mixing in the mouth introduces large air bubbles and regions of non-emulsified fat into the ex vivo boluses, which enhance wetting and lubrication.”


----------



## ae1905

*Climate Change is Turning Antarctica Green, Say Researchers*

Researchers in Antarctica have discovered rapidly growing banks of mosses on the ice continent's northern peninsula, providing striking evidence of climate change in the coldest and most remote parts of the planet. Amid the warming of the last 50 years, the scientists found two different species of mosses undergoing the equivalent of growth spurts, with mosses that once grew less than a millimeter per year now growing over 3 millimeters per year on average, _(the link could be paywalled; alternative source below)_ the Washington Post reported on Thursday. From a report:_ 

"Antarctica is not going to become entirely green, but it will become more green than it currently is," said Matt Amesbury, co-author of the research from the University of Exeter. "This is linking into other processes that are happening on the Antarctic Peninsula at the moment, particularly things like glacier retreat which are freeing up new areas of ice-free land -- and the mosses particularly are very effective colonisers of those new areas," he added. In the second half of the 20th century, the Antarctic Peninsula experienced rapid temperature increases, warming by about half a degree per decade. Plant life on Antarctica is scarce, existing on only 0.3% of the continent, but moss, well preserved in chilly sediments, offers scientists a way of exploring how plants have responded to such changes._


----------



## ae1905

*Polar eye candy: check out this spectacular aerial photo of a Greenlandic fjord from*

blogs.discovermagazine.com 

*NASA's Operation IceBridge*


[HR][/HR] *PLUS: a gallery of other compelling images from the mission*

A fjord in southern Greenland, as seen during Operation IceBridge. (Source: NASA/John Sonntag)

I’m always looking for cool imagery to use here at ImaGeo, and today I stumbled on this photo.

It’s of a fjord in southern Greenland, taken during Operation IceBridge’s final flight of the 2017 Arctic campaign, on May 12, 2017. Fractured sea ice floats between the towering cliffs, with a glacier visible in the far distance at the head of the fjord.
NASA posted the image here today. I’ve done some modest processing to correct a weird color cast in the original.

As NASA puts it:
IceBridge, a six-year NASA mission, is the largest airborne survey of Earth’s polar ice ever flown. It will yield an unprecedented three-dimensional view of Arctic and Antarctic ice sheets, ice shelves and sea ice. These flights will provide a yearly, multi-instrument look at the behavior of the rapidly changing features of the Greenland and Antarctic ice.
​Here are a few of my other favorite images taken during the multi-year project:
A lenticular cloud appears to hover above pressure ridges in the sea ice near Mount Discovery in Antarctica. (Photograph: Courtesy Michael Studinger/NASA)

I originally published the image above back in December of 2013. Shot by Michael Studinger of the IceBridge project, it’s a photograph of a phantasmagorical lenticular cloud hovering above the sea ice near Antarctica’s Mt. Discovery. In the middle distance is a jumble of jagged ice — a pressure ridge shoved up by the ever shifting sea ice.

*SEE ALSO: Phantasm of the Antarctic Atmosphere*
Here’s another IceBridge aerial photo from Antarctica:
The Transantarctic Mountains. (Source: NASA)

This image was photographed from a NASA P-3 airborne laboratory on Nov. 27, 2013, toward the end of the 2013 IceBridge Antarctic campaign. I love the multi-colored horizontal patterning of the rock intersected by snow-filled cracks.

[video=youtube_share;1FjXXMSfTxg]https://youtu.be/1FjXXMSfTxg[/video]
The Norwegian research vessel R/V Lance, photographed during an Operation IceBridge flight on March 19, 2015. Click on the image to watch a video of the overflight. (Source: NASA)

Last but definitely not least is this photo of the Lance, a former sealing ship converted into a Norwegian research vessel. The crew of the Lance froze themselves into the Arctic sea ice north of Norway’s Svalbard archipelago starting in January of 2015 as part of the N-ICE2015 expedition.

They drifted with the ice taking a wide range of scientific measurements until June, when the floe to which they anchored the Lance broke up during the start of the warm season. The photograph was taken on March 19 during an IceBridge overflight. (Click on it for a video.)


----------



## ae1905

*Why Do Flamingos Stand on One Leg?*

blogs.discovermagazine.com 

[HR][/HR] A young flamingo demonstrates its passive, one-legged stance. _(Credit: Rob Felt/Georgia Tech)_

Flamingos are striking not only for their brilliant pink plumes, but for how they often stand on a single slender leg, even when asleep.

Now scientists find that standing on one leg may counter-intuitively require less effort for flamingos than standing on two. It’s a finding that could help lead to more stable legged robots and better prosthetic legs.

*The One-Legged Problem*

One prior explanation for the mystery of why flamingos stand on one leg is that it conserves body heat, as doing so places one less leg in the cool water where they feed. Another possibility that scientists raised over the years was that such a stance reduced muscle fatigue, but until now, researchers had not directly explored how much muscle activity the birds needed to balance on one leg.

Neuromechanist Lena Ting at Emory University and her team previously examined how people standing on one leg kept their balance, which led comparative neuromechanist Young-Hui Chang at the Georgia Institute of Technology to suggest investigating flamingos. Chang’s prior work in zoos helped them conduct research on the birds at Zoo Atlanta.

To help the scientists better understand flamingo anatomy, “we were very fortunate to get two frozen flamingo cadavers donated to us by another zoo,” Chang said. However, when driving the birds back to the lab, Chang recalled that he “had not anticipated how long their legs would be. My cooler was not big enough! I spent the whole drive back to Atlanta nervous about them thawing out. But I worried for nothing. The insulation from the feathers is amazing — if anything, I had trouble thawing them out for our study and had to use heat lamps for several hours to get them thawed out.”
Chang and Ting with their research subjects. _(Credit: Rob Felt/Georgia Tech)_

As the scientists dissected the cadavers, Chang recalled picking up a bird by its leg below the knee and finding the body kept “surprisingly stable. Even across wide angles of tilt — up to 75 degrees pitching the body backward — the knee and hip did not seem to budge.”
The flamingo cadavers showed that the birds could passively support their body weight on one leg without any muscle activity.

“That was the ‘Aha!’ moment when we knew we were on to something special,” Chang said. “If a dead flamingo could do it, then it is probably available for live birds to do.”

Surprisingly, cadaver flamingos could not stably hold a two-legged pose. This suggested it takes more active effort from muscles for the birds to stabilize a two-legged stance than a one-legged one, Ting said.

In addition, the researchers analyzed eight live juvenile flamingos as the birds stood on a plate that measured forces they generated on the ground. They found the flamingos swayed less as they became less active.

“When the birds closed their eyes and fell asleep on one leg, presumably with very little muscle activity, their postural sway was seven times lower compared to when they were very active,” Chang said.

*Passive Balance*

_(Credit: Shutterstock)_

These findings suggest that flamingos rely on passive mechanisms instead of active muscular effort to support their body and control their balance as they stand on one leg.

“We still don’t know what anatomical mechanisms are engaged in the one-legged stance that allows for this, but as far as we can tell, it is related to the skeletal anatomy,” Chang said. “We would need to directly image the skeletal anatomy during this behavior — for example with X-rays — to really get at it, which is a direction for future research.”

The researchers suggest that a flamingo standing on one leg resembles a vertical, balanced, upside-down pendulum. Such a pendulum could in principle keep balance with little to no muscle activity. “I think the very idea that one-legged stance may cost less energy than standing on two legs is very exciting,” Chang said.

“It is counterintuitive, because we typically view standing on one leg as being very difficult and we tend to equate difficult things with consuming more energy.”
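The inverted-pendulum analogy can be given rough numbers. The sketch below assumes a ~3 kg bird with its center of mass ~0.5 m above the foot; both values are illustrative guesses, not measurements from the study. It computes the gravitational toppling torque at small tilts from vertical:

```python
import math

def toppling_torque(mass_kg, com_height_m, tilt_deg, g=9.81):
    """Gravitational toppling torque on an inverted pendulum tilted from vertical."""
    return mass_kg * g * com_height_m * math.sin(math.radians(tilt_deg))

# Illustrative values only: a ~3 kg flamingo, center of mass ~0.5 m up.
for tilt in (0.5, 1.0, 5.0):
    tau = toppling_torque(3.0, 0.5, tilt)
    print(f"tilt {tilt:>4}°  ->  toppling torque {tau:.3f} N·m")
```

Near vertical the toppling torque is only a fraction of a newton-meter, so a passive skeletal mechanism that keeps the joint locked close to vertical would leave very little for muscles to correct.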

This research could help explain why many other bird species stand on just one leg, the researchers said.

“It may also have some important applications for inspiring the design of more efficient legged robots and powered prosthetic devices,” Chang said.

It remains uncertain whether flamingos stand on one leg to reduce muscle activity, save body heat, both, or neither. The researchers need more direct measurements of muscle activity and heat loss in live birds to further test these ideas, Ting said.

Chang and Ting detailed their findings online May 24 in the journal _Biology Letters_, but the research was carried out during Chang and Ting’s spare time, without direct funding.
“It was a labor of love — doing science simply for the sake of learning how nature works,” Chang said.


----------






## ae1905

*Egyptian mummy DNA shows Mediterranean, Turkish and European ancestry*

washingtonpost.com By Ben Guarino

[HR][/HR] Ancient Egyptians were an archaeologist's dream. They left behind intricate coffins, massive pyramids and gorgeous hieroglyphs, the pictorial writing whose code was cracked after the Rosetta Stone's discovery in 1799. Egyptians recorded tales of royalty and gods. They jotted down life's miscellanies, too, as humdrum as beer recipes and doctor's notes.

But there was one persistent hole in ancient Egyptian identity: their chromosomes. Cool, dry permafrost can preserve prehistoric DNA like a natural freezer, but Egypt is a gene incinerator. The region is hot. Within the mummies' tombs, where scientists would hope to find genetic samples, humidity wrecked their DNA. What's more, soda ash and other chemicals used by Egyptian embalmers damaged genetic material.

A study led by researchers at the Max Planck Institute for the Science of Human History and the University of Tubingen in Germany managed to plug some of those genetic gaps. Researchers wrung genetic material from 151 Egyptian mummies, radiocarbon dated from Egypt's New Kingdom (the oldest at 1388 B.C.) to the Roman Period (the youngest at 426 A.D.), as reported Tuesday in the journal Nature Communications.

Johannes Krause, a University of Tubingen paleogeneticist and an author of the study, said the major finding was that “for 1,300 years, we see complete genetic continuity.” Despite repeated conquests of Egypt, by Alexander the Great, Greeks, Romans, Arabs and Assyrians — the list goes on — ancient Egyptians showed little genetic change. “The other big surprise,” Krause said, “was we didn't find much sub-Saharan African ancestry.”

The remains came from Abusir el-Meleq, an ancient Nile community in the middle of Egypt. From the mummies the scientists extracted bone, teeth and soft tissue samples. (Although Egyptian embalmers removed the brains of the deceased, the scientists wrote that “in most cases, non-macerated mummy heads still have much of their soft tissue preserved.”)

The hard samples yielded the most DNA, perhaps because the teeth and bones were protected by soft tissue or because the embalming processes left tougher material intact. After preparing the samples in a sterilized room in Germany, the researchers bathed the samples in UV radiation for an hour to minimize contamination.

Ancient Egyptians were closely related to people who lived along the eastern Mediterranean, the analysis showed. They also shared genetic material with residents of the Turkish peninsula at the time, as well as with Europeans.

Given Egypt's location at the intersection of Africa, Europe and Asia, and the influx of foreign rulers, Krause said he was surprised at how stable the genetics seemed to be over this period. The scientists were particularly interested in the change in ruling class at the turn of the first millennium. First came the Hellenistic dynasty, in the aftermath of Alexander the Great’s conquests, from 332 B.C. to 30 B.C., and then Roman rule from 30 B.C. to about 400 A.D. And yet the genetics of the Abusir el-Meleq community appeared to be unperturbed by shifting politics.

The scientists compared these ancient genetics with those of 100 modern Egyptians and 125 modern Ethiopians that had been previously analyzed. If you ask Egyptians, they'll say that they have become more European recently, Krause said. “We see exactly the opposite,” he said.

It was not until relatively recently in Egypt's long history that sub-Saharan genetic influences became more pronounced. “In the last 1,500 years, Egypt became more African, if you want,” Krause said.

In their paper, the researchers acknowledged that “all our genetic data were obtained from a single site in Middle Egypt and may not be representative for all of ancient Egypt.” In the south of Egypt, the authors wrote, sub-Saharan influences may have been stronger.

This study left two gaps in the Egyptian timeline that Krause wants to fill, he said. It is not clear when the African gene flow, present in modern Egyptians, occurred. Nor could the study determine the origin of the Egyptians. “The other big question is, 'Where did the ancient Egyptians come from?' ” Krause said. To answer that, scientists will have to find genomes “back further in time, in prehistory.”

*Read more*:
This tiny fetus is the youngest ancient Egyptian mummy ever found
Happy anniversary, Ötzi: 25 years later, we’re still obsessed with the Iceman
New study on Ötzi the Iceman reveals humanity’s intimate affair with one microbe


----------



## ae1905

*Tree-Climbing Goats Keep the 'Desert Gold' Growing*

blogs.discovermagazine.com 
[HR][/HR] Goats grazing on an argan tree in southwestern Morocco. In the fruiting season, many clean argan nuts are spat out by the goats while chewing their cud. _(Credit: H. Garrido/EBD-CSIC)_

What do goats and squirrels have in common?

They both climb trees, of course. While squirrels live amongst the branches, goats, or at least those in arid regions, climb them for dinner. And that’s good for the goats, and the trees.

Scientists have discovered that the domesticated goats in southern Morocco benefit the argan trees, _Argania spinosa,_ by spitting out the seeds of the fruits they eat, which helps in seed dispersal. Argan trees play an important role in southern Morocco acting as a barrier for the Sahara Desert, and providing locals with wood, food, medicine and other materials. Argan oil, sometimes called “desert gold,” has also emerged as an international luxury commodity, prized for its supposed anti-aging and conditioning properties for hair and skin. 

Tree-climbing goats play a crucial role dispersing nuts from argan trees, ensuring the success of future generations of this valuable resource. But how, exactly, do these goats get the job done?

“For plants, there are well-known reproductive benefits with dispersing their seeds far from the maternal plant, including seed and seedling survival,” the scientists — Miguel Delibes, Irene Castaneda and Jose M. Fedriani — write in their paper published in May in the journal _Frontiers in Ecology and the Environment_.
Goats in southern Morocco climb argan trees to eat their fruit and leaves. _(Credit: imagebroker/Rex/Shutterstock)_

Domesticated goats in grassy, temperate climates don’t climb trees because their food is literally under hoof. But in hot regions where grasses are patchy, like Africa, Mexico and some parts of Europe, goats leap upon green shrubs and squat in trees for sustenance.

In southern Morocco, goats climb 30 feet to the treetops. During autumn, when ground vegetation is scarce, three-quarters of their foraging time is spent in the ubiquitous argan trees.


Biologists are aware that ruminants (goats, cows, sheep, deer, etc.) spread seeds through defecation. But the paper’s authors say that is not the only way to sow them. They have chronicled seed spitting among sheep and deer in Spain. So it seemed likely to them that the Moroccan goats were spitting as well, especially because the seeds of argan trees are large and probably difficult to pass through the goats’ intestines.

Like other ruminants, goats have multiple stomachs. They regurgitate the contents of the rumen, the first stomach stop, and chew on it some more. During this process, the goats spit out the argan seeds —sometimes days later and far from the parent tree, the researchers discovered.

The message to other scientists studying animal seed dispersal? Don’t just study the dung.


----------



## ae1905

*X-ray Blast Produces a 'Molecular Black Hole'*

blogs.discovermagazine.com 

[HR][/HR] The LCLS Coherent X-ray Imaging Experimental Station. _(Credit: Nathan Taylor/SLAC)_

When researchers want to take pictures of very small things, like individual molecules, they have to get creative.

When scales shrink to seemingly imperceivable levels, images must be captured using indirect techniques that record how the subject being photographed interacts with its environment. One way to do this is by observing how a beam of particles disperses around the object. Working backward, researchers can then infer what the object in question looks like.
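The working-backward step can be sketched numerically. In a far-field diffraction experiment the detector records only the intensity of the Fourier transform of the object's density; the phases are lost. The toy 1-D example below uses hypothetical atom positions and weights, not real data, to show what is and isn't recoverable:

```python
import numpy as np

# Toy 1-D "molecule": electron density as a few sharp peaks on a grid.
n = 256
density = np.zeros(n)
density[[100, 120, 150]] = [1.0, 0.6, 0.8]   # hypothetical positions/weights

# The detector records only the intensity of the Fourier transform;
# the phase is lost, which is why structure must be inferred, not read off.
intensity = np.abs(np.fft.fft(density)) ** 2

# Working backward: the inverse FFT of the intensity gives only the
# autocorrelation of the density (the "Patterson function" of crystallography).
autocorr = np.real(np.fft.ifft(intensity))
print(autocorr[:5])
```

Recovering the density itself from the intensity alone is the famous phase problem; it is solved with additional constraints and iterative algorithms, which is the "creative" part of the inference.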

*Beam Power*

The particle beams that do the heavy lifting for this kind of imaging require sophisticated equipment to create. At the SLAC National Accelerator Laboratory at Stanford University, the linear accelerator stretches out for two miles, accelerating electrons that are used to generate intensely energetic X-ray pulses aimed at minuscule targets. In a paper published Tuesday in _Nature_, SLAC researchers observed peculiar behavior among atoms subjected to their X-ray beam, and they’re calling it a “molecular black hole.”

The Linac Coherent Light Source (LCLS) at SLAC is used to take pictures of organic molecules and biological processes that take place at scales of only a few atoms. A pulse of X-rays scatters off the molecules in a predictable way, giving researchers an idea of their structure. This happens in the brief instant before the sample is destroyed by the pulse’s intense energy, something the researchers call “diffraction before destruction.” Understanding how the molecules behave as the beam passes through is critical to obtaining precise measurements.

*From The Inside Out*

Working with atoms of xenon and molecules containing iodine atoms, the researchers saw something unexpected occur. The beam ripped through the outer shells of the atoms and stripped away the innermost electrons, leaving a gaping void between the nucleus and the outer electrons. The overwhelmingly positive charge this created then pulled electrons inward with enough strength not only to reclaim the atom’s own outer electrons, but also to steal electrons away from neighboring atoms.

Ordinarily, this kind of electron theft doesn’t happen in nature because the forces required are too great. Done fast enough, and with enough power, however, the naked nuclei overwhelm the grip of neighboring atoms and siphon off electrons, in a process, the researchers say, that is similar to a black hole consuming a star.

“When we have really, really intense X–rays like we do there’s enough X–rays that you knock out one electron and before there’s time for recombination you knock off another and then knock off another and so on and so forth,” says LCLS staff scientist and study co-author Sebastien Boutet. “What that ends up doing is stripping most of the inner shells and then that very highly charged molecule unexpectedly sucked in a bunch of electrons from neighboring atoms as a consequence.”

The molecular version doesn’t work the same way as a cosmic black hole, which relies on immense gravitational forces to suck in matter, but the observed effect is similar. Understanding how the beam interacts with atoms of this size, which often show up in their experiments, will help researchers fine-tune their images. The accelerator is currently undergoing an upgrade which will allow for a drastic increase in the number of beam pulses per second, expanding the machine’s imaging capacity.
The more precision researchers can achieve while working at scales of just a few hundred nanometers, the more they will see.


----------



## ae1905

*3rd Gravitational Wave Detection Is About Much More Than Black Holes*

blogs.discovermagazine.com
[HR][/HR] More than a year after detecting the first confirmed gravitational waves, researchers were busy at the Laser Interferometer Gravitational-wave Observatory (LIGO) in Livingston, La., upgrading the massive instrument. _(LIGO lab)_

Our sun was still dim. Waves crashed on Martian beaches. Life was emerging on Earth.
That’s when the ghosts of two dead stars — black holes dozens of times more massive than our sun — merged in a far-off corner of the universe. In their final moments, these binary black holes were circling each other hundreds of times per second, as each one spun at 10 times that rate.

The rumbles of distant thunder from that collision reached Earth on Jan. 4 of this year, passing through the detector at the Laser Interferometer Gravitational-Wave Observatory (LIGO) in Hanford, Washington. Then, traveling at the speed of light, this wrinkle in space-time passed through LIGO’s second detector in Livingston, Louisiana, just a fraction of a second later.
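That “fraction of a second” is bounded by geometry: the Hanford and Livingston detectors are roughly 3,000 km apart, so a signal traveling at light speed can arrive at the two sites at most about 10 milliseconds apart. A quick sanity check:

```python
# Light-travel time between LIGO's two detectors bounds the arrival-time gap.
C = 299_792_458          # speed of light, m/s
BASELINE_M = 3.002e6     # straight-line Hanford-Livingston separation, ~3002 km

max_delay_ms = BASELINE_M / C * 1000
print(f"maximum inter-detector delay: {max_delay_ms:.1f} ms")   # ~10 ms
```

The measured delay (and its sign) is also what lets the collaboration roughly triangulate which patch of sky the wave came from.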

The results were published Thursday in the journal _Physical Review Letters_.
*Cosmic Forces*

Gravity is the weakest among nature’s four fundamental forces. So only extreme cosmic events like supernovas, neutron stars and merging black holes can make detectable gravitational waves. The waves are so weak that they’d warp the distance between Earth and sun by just the width of a hydrogen atom. But as these waves pass through LIGO’s twin detectors, its enormous lasers can pick up on the truly tiny stretches and squeezes of space-time. You can think of it like a seismometer for measuring mini quakes in the cosmos’ gravitational fabric.
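The hydrogen-atom comparison translates directly into the dimensionless strain h = ΔL/L that LIGO actually measures. A back-of-the-envelope check, using a rough one-angstrom atom size:

```python
# The "hydrogen atom over the Earth-sun distance" claim as a strain estimate.
H_ATOM_M = 1.1e-10   # rough diameter of a hydrogen atom, m
AU_M = 1.496e11      # Earth-sun distance, m

strain = H_ATOM_M / AU_M
print(f"implied strain h = dL/L ~ {strain:.1e}")   # ~7e-22
```

A strain of order 10⁻²¹ to 10⁻²² is exactly the regime LIGO's kilometer-scale laser interferometers were built to resolve.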

When LIGO gets a hit, the gravitational wave makes a characteristic signal that scientists call a “chirp” because of the sound it makes once translated into a format human ears can hear.

This was the third such detection since Albert Einstein first predicted gravitational waves a century ago as part of his general theory of relativity, or theory of gravity. Taken together, these observations form the first samples of a black hole census with far-reaching implications.

Before colliding, the binary black holes spotted earlier this year weighed in at 19 and 31 times our sun’s mass. After merging, the pair created a single black hole 49 times more massive than the sun. Einstein’s equations tell us that energy and mass are interchangeable. And so the missing solar mass worth of energy was radiated out across the universe as gravitational waves.
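Using the rounded masses quoted above, the bookkeeping works out to roughly one solar mass converted into gravitational-wave energy via E = mc². A quick estimate (the published figures differ slightly from these rounded numbers):

```python
# Missing mass radiated as gravitational waves, via E = m c^2.
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

m_radiated = (19 + 31 - 49) * M_SUN   # ~1 solar mass unaccounted for
energy_j = m_radiated * C**2
print(f"energy radiated as gravitational waves: {energy_j:.2e} J")
```

For comparison, that is more energy than all the stars in the observable universe emit as light over the same brief instant, which is why even so distant a merger is detectable.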

And with this detection, scientists for the first time think the two black holes might have been spinning in opposite directions. That could reveal clues about the lives of the stars that formed them. It’s possible that the two stars lived in a dense stellar cluster.
Before LIGO, astronomers didn’t know that so-called solar mass black holes, which form when stars die, could reach such extreme sizes.

This census can also help explain an enduring mystery in astronomy. Scientists have seen supermassive black holes that dominate entire galaxies, as well as small black holes that form after stars die. We even now know about so-called intermediate mass black holes weighing as much as thousands of suns. But how do these all form? Do many small black holes combine into larger and larger behemoths? LIGO is just starting to piece together this puzzle.
Astrophysicist Stuart Aston monitors external vibrations on the LIGO test mass mirrors during an engineering run in November 2016. _(Ernie Mastroianni/Discover)_

*More Than Black Holes*

The latest signal traveled nearly 3 billion light-years to reach Earth — twice as far off as the other detections. And because the gravitational wave arrived undiminished, it provides yet another proof of one of Einstein’s theories, showing that gravity travels at light speed.

“LIGO is going to be about a lot more than black holes,” says University of Wisconsin-Milwaukee (UWM) physicist Jolien Creighton, a veteran member of the detection team.
The observatory has forced open a new window on the universe, allowing scientists to hear from distant cosmic reaches — places where conventional telescopes come up empty. LIGO will bring new insights into everything from the heaviest elements on Earth to the nature of gravity itself.

LIGO’s next big breakthrough is expected to come from detecting collisions of binary neutron stars — the corpses of dead stars that pack a sun’s worth of mass into a city-sized sphere. These mergers happen at similar wavelengths to the black hole collisions LIGO’s already seen, and scientists once expected to see neutron stars first.
“This paper only reports on a few weeks worth of data, and we plan to run until August,” says Chad Hanna, a LIGO scientist from Pennsylvania State University. “We might still detect more events.”

So it’s possible that a binary neutron star merger could still be seen this year, or after the LIGO collaboration upgrades its instruments over the coming years. An upgrade last summer didn’t increase the instrument’s sensitivity quite as much as scientists hoped.

“A lot of the elements we see on Earth were not formed in exploding stars but formed in the collision of binary neutron stars,” Creighton says. Humans are mostly made of typical star stuff like carbon and hydrogen, but other earthly elements with high atomic numbers, like gold, are suspected to have come from these more exotic events.
“Most of the gold we see in the solar system might have come from a binary neutron star collision that produced something like a Jupiter mass of gold and dispersed it in all directions,” Creighton says.

LIGO will detect neutron star mergers and send out an alert to the larger astronomy community, telling researchers to all point their telescopes to that region of sky and catch the event. The observations will let scientists test theories under conditions that could never be recreated in a lab.
Inside a stainless steel chamber, LIGO technicians examine the surface of one of the test mass mirrors that will reflect infrared laser light to measure the effect of gravitational waves. After installation, all air was vacuumed from this chamber. _(Mike Fyffe/LIGO lab)_ 

Physicists also hope that more observations from LIGO will reveal new insights into gravity itself, as well as the theorized force-carrying particle called the graviton. It is to gravity what the photon is to light. Like the photon, scientists suspect it too has no mass. And this third LIGO gravitational wave detection helped constrain how big the graviton could possibly be. But new tests are on the horizon as well.

“I’m really excited about testing general relativity,” says UWM physicist Sarah Caudill, who works with the computer clusters that make LIGO detections possible. She suspects LIGO could reveal Einstein’s theory needs some small corrections.

“I think most people would be surprised if general relativity was 100 percent correct, but there’s no evidence that it’s not yet. Einstein created this theory 100 years ago and with no ability to observe gravitational waves, so for him to be 100 percent correct would be quite a feat.”


----------



## ae1905

*Scientists Have Found the Oldest Known Human Fossils*

theatlantic.com

Ed Yong

[HR][/HR] Hundreds of thousands of years ago, around 62 miles west of what would eventually become Marrakesh, a group of people lived in a cave overlooking a lush Moroccan landscape. They rested there, building fires to keep themselves warm. They hunted there, sharpening stone tools to bring down animals. And they died there, leaving their bones behind in the dirt. At the time, there would have been nothing particularly notable about these cave-dwellers. They were yet more _Homo sapiens_, members of a nascent ape species that had spread across Africa. But in their death, they have become singularly important.

That cave is now called Jebel Irhoud, and bones of its former occupants have been recently unearthed by an international team of scientists. They mark the earliest fossilized remains of _Homo sapiens_ ever found. Until now, that honor belonged to two Ethiopian fossils that are 160,000 and 195,000 years old respectively. But the Jebel Irhoud bones, and the stone tools that were uncovered with them, are far older—around 315,000 years old, with a possible range of 280,000 to 350,000 years.

It’s not just _when_ these people died that matters, but _where_. Their presence in north Africa complicates what was once a tidy picture of humanity arising in the east of the continent. “What people, including myself, used to think was that there was a cradle of humankind in East Africa about 200,000 years ago, and all modern humans descend from that population,” says Philipp Gunz from the Max Planck Institute for Evolutionary Anthropology, who was involved in the new excavation. “The new finds indicate that _Homo sapiens_ is much older and had already spread across all of Africa by 300,000 years ago. They really show that the African story of our species was more complex than what we used to think.”

“Humans had already migrated across the African landscape, and were evolving at a continental scale.”

Jebel Irhoud rose to prominence in 1961, when miners turned the site into a quarry. They were looking for barite minerals, but to their surprise, they found a fossilized skull. Soon, they disinterred more bones: another skull, a child’s jaw, and fragments of arm bones and hips. From the start, these specimens were controversial. Their exact location was never recorded, which makes it very hard to work out their age. Scientists initially thought that they were the 40,000-year-old remains of Neanderthals—and were wrong on both counts. They’re much older, and they’re more likely to be _Homo sapiens_.
After those discoveries, Jebel Irhoud was neglected. But in 2004, Jean-Jacques Hublin from the Max Planck Institute for Evolutionary Anthropology led a team back to the site, clearing away decades’ worth of accumulated debris in a search for more fossils. And after a few seasons of digging, they found some—a partial skull, fragments of facial bones, a nearly complete adult jawbone, and other bits and pieces from at least five individuals.

These people had very similar faces to today’s humans, albeit with slightly more prominent brows. But the backs of their heads were very different. Our skulls are rounded globes, but theirs were lower on the top and longer at the back. If you saw them face on, they could pass for a modern human. But if they turned around, you’d be looking at a skull that’s closer to extinct hominids like _Homo erectus_. “Today, you wouldn’t be able to find anyone with a braincase that shape,” says Gunz.

Comparison of the skulls of a Jebel Irhoud human (left) and a modern human (right) (NHM London)

Their brains, though already as large as ours, must also have been shaped differently. It seems that the size of the human brain had already been finalized 300,000 years ago, but its structure—and perhaps its abilities—were fine-tuned over the subsequent millennia of evolution.

At Jebel Irhoud, the team also found several stone tools—small pieces of flint with sharp edges. Several of these had clearly been heated in the distant past, but not because their makers were deliberately burning the implements. More likely, “you can imagine that people were dropping stones on the ground, and later starting fires on top,” explains Shannon McPherron, an expert on stone tools who was involved in the new study.

The team exploited this incidental heating to date the tools. Over time, flint gradually builds up a small charge as it reacts to natural sources of radiation around it. That charge dissipates whenever it’s heated, before growing again. By testing the stones back in their lab, McPherron’s team could work out how much charge they had accumulated since they were last heated—which must have been when they were dropped in the caves. This technique, known as thermoluminescence, told them that the tools were between roughly 280,000 and 350,000 years old.
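The dating arithmetic itself is simple: the age since last heating is the accumulated radiation dose divided by the annual dose rate from the surroundings. A minimal sketch, with made-up illustrative numbers (the hard part in practice is measuring those two quantities, not the division):

```python
# Thermoluminescence dating in one line of arithmetic: the time since a
# flint was last heated is its accumulated radiation dose ("charge")
# divided by the annual dose rate of its surroundings.
# All numbers below are illustrative, not from the Jebel Irhoud study.

def tl_age_years(accumulated_dose_gy: float, annual_dose_rate_gy: float) -> float:
    """Years since last heating = total absorbed dose / dose per year."""
    return accumulated_dose_gy / annual_dose_rate_gy

# A flint that soaked up 300 Gy in sediment delivering 1 mGy per year:
print(f"{tl_age_years(300.0, 0.001):,.0f} years since last heating")
# → 300,000 years since last heating
```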

Some of the Middle Stone Age stone tools from Jebel Irhoud (Mohammed Kamal / MPI EVA Leipzig)

The team checked those dates by estimating the ages of the fossils. They first did that a decade ago, using the fossils collected in the 1960s, and they arrived at an age of 160,000 years. But that was based on imperfect guesses about the sediments in which the bones had been buried. This time, after taking careful readings from the site itself, the team could more accurately re-do their calculations. They got a much older date of 286,000 years, which matches well to the estimated age of the tools. “I think it’s a pretty tight picture,” says McPherron.

The new dates radically change the position of the Jebel Irhoud residents in the family tree of our species. Based on the earlier age estimates, scientists had always viewed these people as a primitive group of humans who were clinging on in North Africa while their more modern cousins were sweeping out of the East. “People thought that North Africa had nothing to do with modern human evolution, and that this was a relict population,” says Gunz. “Now we know that they’re close to the root of the _Homo sapiens_ lineage.”

The new specimens cast fossils from other parts of Africa in a new light. For example, the so-called Florisbad skull, which was discovered in South Africa in 1932, is around 260,000 years old. Based on that old age, “people had a hard time accepting this as a member of _Homo sapiens_, but I think our work brings the Florisbad skull back into the discussion,” says Gunz. If the skull really did belong to a member of our species, it means that around 300,000 years ago, humans had already “migrated across the African landscape, and were evolving at a continental scale,” says Gunz.

The team have done a good job, says Erella Hovers from the Hebrew University of Jerusalem, but “whether this is a breakthrough in our understanding of human evolution, I’m not sure.” Others had already suggested that the origin of our species was tied to the dawn of the Middle Stone Age—a period between 250,000 and 300,000 years ago, when people went from making large stone hand-axes to smaller, lighter tools like awls and spear tips. Those lighter tools had already been found in other parts of Africa, so the Jebel Irhoud finds “support a hypothesis that has been around for a while,” Hovers says.

That’s true, says McPherron, but until now, the bones and stones were telling different tales. The stones were all over Africa by 300,000 years ago, and the fossils were apparently no older than 195,000. Were the tools even made by _Homo sapiens_ or some other hominid? “We had a disjuncture,” he says. “We had a major transition in behavior but no biological transition to go with it. Jebel Irhoud fills that gap nicely.”

It’s possible that people spread all over Africa, aided by their new stone technology, which allowed them to kill large animals from a distance. Certainly the Sahara would have permitted their passage: At the time, it was a lush, green savannah and not the impassable desert of today. Alternatively, humans may have already spread throughout the continent, and regional innovators developed Middle Stone Age tools independently.

Regardless, the new finds are “a very important discovery,” says Zeray Alemseged from the University of Chicago. “They’re placed at a critical time period when the earliest members of our species could have evolved, and they’re critical for better understanding the patterns of physical and behavioral evolution [among humans] across the African continent. They confirm the pan-African nature of human ancestry.”


----------



## ae1905

*This worm grew a second head after a trip to space*

engadget.com 

It’s further confirmation that space travel permanently alters our bodies.
   

Swapna Krishna, @skrishna

SCIEPRO 

There are all kinds of experiments going on aboard the International Space Station, but probably none of them produces results as strange as this one. An article published today in the journal _Regeneration_ details a recent experiment in which an amputated flatworm grew two heads -- twice.

Planarian flatworms are known to have extraordinary regenerative properties. They can regrow complex organs, including a central nervous system, from small pieces of their bodies. In order to study how space travel might affect these regenerative abilities, the scientists sent a group of worms up to the International Space Station; half were amputated, with their heads and tails cut off, and half were normal.

The worms stayed aboard the ISS for five weeks, after which they were observed for 20 months on Earth. This was when scientists noticed something very strange: One of the 15 amputated worms actually grew two heads. While this has occurred in fully Earth-bound worms, it's incredibly rare -- in 18 years of working with these worms, this specific research team has never encountered the phenomenon.

The team once again amputated the worm (seriously, what did this worm ever do to them?) and it grew _another double head_ from the cut end. This has led the scientists to hypothesize that space travel has made some fundamental change to the worm's regeneration abilities.

It's not quite clear yet what implications this study has for humans, though it does support the idea that our bodies are permanently changed by space travel. These worms will help further examine the risks of space travel, but even more important than that, they may help scientists discover how to extend worms' regenerative properties to humans. After all, being able to regrow a limb would be pretty useful in deep space travel.


----------



## ae1905

*The Maths of Life and Death: Our Secret Weapon in the Fight against Disease*

scientificamerican.com 

Christian Yates,The Conversation UK
[HR][/HR] _The following essay is reprinted with permission from The Conversation, an online publication covering the latest research._
Maths is the language of science. It crops up everywhere from physics to engineering and chemistry – aiding us in understanding the origins of the universe and building bridges that won’t collapse in the wind. Perhaps a little more surprisingly, maths is also increasingly integral to biology.

For hundreds of years mathematics has been used, to great effect, to model relatively simple physical systems. Newton’s universal law of gravitation is a fine example. Relatively simple observations led to a rule which, with great accuracy, describes the motion of celestial bodies billions of miles away. Traditionally, biology has been viewed as too complicated to submit to such mathematical treatment.

Biological systems are often classified as “complex”. Complexity in this sense means that, due to the complicated interaction of many sub-components, biological systems can exhibit what we call emergent behaviour – the system as a whole demonstrates properties which the individual components acting alone cannot. This biocomplexity has often been mistaken for vitalism, the misconception that biological processes are dependent on a force or principle distinct from the laws of physics and chemistry. Consequently, it has been assumed that complex biological systems are not amenable to mathematical treatment.

There were some early dissenters. Famous computer scientist and World War II code-breaker Alan Turing was one of the first to suggest that biological phenomena could be studied and understood mathematically. In 1952 he proposed a pair of beautiful mathematical equations which provide an explanation for how pigmentation patterns might form on animals’ coats.
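Those equations are what is now called a reaction–diffusion system: two chemicals ("morphogens") that react with each other while diffusing at different rates. Sketched in general form (the specific reaction terms $f$ and $g$ vary by model):

```latex
\frac{\partial u}{\partial t} = f(u, v) + D_u \nabla^2 u, \qquad
\frac{\partial v}{\partial t} = g(u, v) + D_v \nabla^2 v
```

When the diffusion coefficients $D_u$ and $D_v$ differ enough, a uniform mixture can become unstable and spontaneously organize into spots or stripes.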

Not only was his work beautiful, it was also counter-intuitive – the sort of work that only a brilliant mind like Turing’s could ever have dreamed up. Even more of a pity, then, that he was so poorly treated under the draconian anti-homosexuality laws of the time. After a course of “corrective” hormone treatment, he killed himself just two years later.

*An emerging field*

Since then, the field of mathematical biology has exploded. In recent years, increasingly detailed experimental procedures have led to a huge influx in the biological data available to scientists. This data is being used to generate hypotheses about the complexity of previously abstruse biological systems. In order to test these hypotheses, they must be written down in the form of a model which can be interrogated to determine whether it correctly mimics the biological observations. Mathematics is the natural language in which to do this.

In addition, the advent of, and subsequent increase in, computational ability over the last 60 years has enabled us to suggest and then interrogate complex mathematical models of biological systems. The realisation that biological systems can be treated mathematically, coupled with the computational ability to build and investigate detailed biological models, has led to the dramatic increase in the popularity of mathematical biology.

Maths has become a vital weapon in the scientific armoury we have to tackle some of the most pressing questions in medical, biological and ecological science in the 21st century. By describing biological systems mathematically and then using the resulting models, we can gain insights that are impossible to access through experiments and verbal reasoning alone. Mathematical biology is incredibly important if we want to change biology from a descriptive into a predictive science – giving us power, for example, to avert pandemics or to alter the effects of debilitating diseases.

*A new weapon*

Over the last 50 years, for example, mathematical biologists have built increasingly complex computational representations of the heart’s physiology. Today, these highly sophisticated models are being used in an attempt to understand better the complicated functioning of the human heart. Computer simulations of heart function allow us to make predictions about how the heart will interact with candidate drugs, designed to improve its function, without having to undertake expensive and potentially risky clinical trials.

We use mathematical biology to study disease as well. On an individual scale, researchers have used mathematical immunology to elucidate the mechanisms by which our immune system battles viruses, and suggested potential interventions for tipping the scales in our favour. On a wider scale, mathematical biologists have proposed mechanisms that can be used to control the spread of deadly epidemics like Ebola, and to ensure the finite resources dedicated to this purpose are employed in the most efficient way possible.

Mathematical biology is even being used to inform policy. There has been research on fisheries, for example, using mathematical modelling to set realistic quotas in order to ensure we do not overfish our seas and that we protect some of our most important species.

The increased comprehension gleaned by taking a mathematical approach can lead to better understanding of biology at a range of different scales. At the Centre for Mathematical Biology in Bath, for example, we study a number of pressing biological problems. At one end of the spectrum, we try to develop strategies for averting the devastating effects of locust plagues comprising up to a billion individuals. At the other end, we try to elucidate the mechanisms that give rise to the correct development of the embryo.

Although mathematical biology has traditionally been the domain of applied mathematicians, it is clear that mathematicians who self-classify as pure have a role to play in the mathematical biology revolution. The pure discipline of topology is being used to understand the knotty problem of DNA packing and algebraic geometry is being used to select the most appropriate model of biochemical interaction networks.

As the profile of mathematical biology continues to rise, emerging and established scientists from disciplines across the scientific spectrum will be drawn to tackle the rich range of important and novel problems that biology has to offer.

Turing’s revolutionary idea, although not fully appreciated in his time, demonstrated that there was no need to appeal to vitalism – the god in the machine – to understand biological processes. Chemical and physical laws encoded in mathematics, or “mathematical biology” as we now call it, could do just fine.

_This article was originally published on The Conversation. Read the original article._


----------



## ae1905

*New evidence that all stars are born in pairs*

phys.org 
*June 14, 2017 by Robert Sanders*

Radio image of a very young binary star system, less than about 1 million years old, that formed within a dense core (oval outline) in the Perseus molecular cloud. All stars likely form as binaries within dense cores. Credit: SCUBA-2 survey image by Sarah Sadavoy, CfA

Did our sun have a twin when it was born 4.5 billion years ago?

Almost certainly yes—though not an identical twin. And so did every other sunlike star in the universe, according to a new analysis by a theoretical physicist from UC Berkeley and a radio astronomer from the Smithsonian Astrophysical Observatory at Harvard University.

Many stars have companions, including our nearest neighbor, Alpha Centauri, a triplet system. Astronomers have long sought an explanation. Are binary and triplet star systems born that way? Did one star capture another? Do binary stars sometimes split up and become single stars?

Astronomers have even searched for a companion to our sun, a star dubbed Nemesis because it was supposed to have kicked an asteroid into Earth's orbit that collided with our planet and exterminated the dinosaurs. It has never been found.

The new assertion is based on a radio survey of a giant molecular cloud filled with recently formed stars in the constellation Perseus, and a mathematical model that can explain the Perseus observations only if all sunlike stars are born with a companion.

"We are saying, yes, there probably was a Nemesis, a long time ago," said co-author Steven Stahler, a UC Berkeley research astronomer.

"We ran a series of statistical models to see if we could account for the relative populations of young single stars and binaries of all separations in the Perseus molecular cloud, and the only model that could reproduce the data was one in which all stars form initially as wide binaries. These systems then either shrink or break apart within a million years."
   
A radio image of a triple star system forming within a dusty disk in the Perseus molecular cloud obtained by the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. Credit: Bill Saxton, ALMA (ESO/NAOJ/NRAO), NRAO/AUI/NSF

In this study, "wide" means that the two stars are separated by more than 500 astronomical units, or AU, where one astronomical unit is the average distance between the sun and Earth (93 million miles). A wide binary companion to our sun would have been 17 times farther from the sun than its most distant planet today, Neptune.

Based on this model, the sun's sibling most likely escaped and mixed with all the other stars in our region of the Milky Way galaxy, never to be seen again.

"The idea that many stars form with a companion has been suggested before, but the question is: how many?" said first author Sarah Sadavoy, a NASA Hubble fellow at the Smithsonian Astrophysical Observatory. "Based on our simple model, we say that nearly all stars form with a companion. The Perseus cloud is generally considered a typical low-mass star-forming region, but our model needs to be checked in other clouds."

The idea that all stars are born in a litter has implications beyond star formation, including the very origins of galaxies, Stahler said.

Stahler and Sadavoy posted their findings in April on the arXiv server. Their paper has been accepted for publication in the Monthly Notices of the Royal Astronomical Society.

*Stars birthed in 'dense cores'*
Astronomers have speculated about the origins of binary and multiple star systems for hundreds of years, and in recent years have created computer simulations of collapsing masses of gas to understand how they condense under gravity into stars. They have also simulated the interaction of many young stars recently freed from their gas clouds. Several years ago, one such computer simulation by Pavel Kroupa of the University of Bonn led him to conclude that all stars are born as binaries.
   
This infrared image from the Hubble Space Telescope contains a bright, fan-shaped object (lower right quadrant) thought to be a binary star that emits light pulses as the two stars interact. The primitive binary system is located in the IC …

Yet direct evidence from observations has been scarce. As astronomers look at younger and younger stars, they find a greater proportion of binaries, but why is still a mystery.

"The key here is that no one looked before in a systematic way at the relation of real young stars to the clouds that spawn them," Stahler said. "Our work is a step forward in understanding both how binaries form and also the role that binaries play in early stellar evolution. We now believe that most stars, which are quite similar to our own sun, form as binaries. I think we have the strongest evidence to date for such an assertion."

According to Stahler, astronomers have known for several decades that stars are born inside egg-shaped cocoons called dense cores, which are sprinkled throughout immense clouds of cold, molecular hydrogen that are the nurseries for young stars. Through an optical telescope, these clouds look like holes in the starry sky, because the dust accompanying the gas blocks light from both the stars forming inside and the stars behind. The clouds can, however, be probed by radio telescopes, since the cold dust grains in them emit at radio wavelengths, and radio waves are not blocked by the dust.

The Perseus molecular cloud is one such stellar nursery, about 600 light-years from Earth and about 50 light-years long. Last year, a team of astronomers completed a survey that used the Very Large Array, a collection of radio dishes in New Mexico, to look at star formation inside the cloud. Called VANDAM, it was the first complete survey of all young stars in a molecular cloud, that is, stars less than about 4 million years old, including both single and multiple stars down to separations of about 15 astronomical units. This captured all multiple stars with a separation of more than about the radius of Uranus' orbit—19 AU—in our solar system.

Stahler heard about the survey after approaching Sadavoy, a member of the VANDAM team, and asking for her help in observing young stars inside dense cores. The VANDAM survey produced a census of all Class 0 stars – those less than about 500,000 years old – and Class I stars – those between about 500,000 and 1 million years old. Both types of stars are so young that they are not yet burning hydrogen to produce energy.

Sadavoy took the results from VANDAM and combined them with additional observations that reveal the egg-shaped cocoons around the young stars. These additional observations come from the Gould Belt Survey with SCUBA-2 on the James Clerk Maxwell Telescope in Hawaii. By combining these two data sets, Sadavoy was able to produce a robust census of the binary and single-star populations in Perseus, turning up 55 young stars in 24 multiple-star systems, all but five of them binary, and 45 single-star systems.

Using these data, Sadavoy and Stahler discovered that all of the widely separated binary systems—those with stars separated by more than 500 AU—were very young systems, containing two Class 0 stars. These systems also tended to be aligned with the long axis of the egg-shaped dense core. The slightly older Class I binary stars were closer together, many separated by about 200 AU, and showed no tendency to align along the egg's axis.
   
A dark molecular cloud, Barnard 68, is filled with gas and dust that block the light from stars forming inside as well as stars and galaxies located behind it. These and other stellar nurseries, like the Perseus molecular cloud, can only be …

"This has not been seen before or tested, and is super interesting," Sadavoy said. "We don't yet know quite what it means, but it isn't random and must say something about the way wide binaries form."

*Egg-shaped cores collapse into two centers*
Stahler and Sadavoy mathematically modeled various scenarios to explain this distribution of stars, assuming typical formation, breakup and orbital shrinking times. They concluded that the only way to explain the observations is to assume that all stars of masses around that of the sun start off as wide Class 0 binaries in egg-shaped dense cores, after which some 60 percent split up over time. The rest shrink to form tight binaries.
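The bookkeeping behind that conclusion can be sketched in a few lines. The 60 percent breakup fraction is the figure from the study; the number of cores is an arbitrary round number for illustration:

```python
# Toy version of the "everyone starts as a wide binary" model: each dense
# core forms one wide binary; a fraction break apart into two single stars,
# and the rest shrink into tight binaries. The 60% breakup fraction is the
# article's figure; 100 cores is an arbitrary round number.

def evolve_cores(n_cores: int, breakup_fraction: float = 0.6):
    broken = n_cores * breakup_fraction      # wide binaries that split up
    singles = 2 * broken                     # each split yields two single stars
    tight_binaries = n_cores - broken        # the rest stay bound and shrink
    return singles, tight_binaries

singles, binaries = evolve_cores(100)
print(singles, binaries)  # → 120.0 40.0
```

Even this toy version shows why the breakup fraction matters: it predicts roughly three single stars for every surviving tight binary, which is the kind of population ratio the statistical models were fit against.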

"As the egg contracts, the densest part of the egg will be toward the middle, and that forms two concentrations of density along the middle axis," he said. "These centers of higher density at some point collapse in on themselves because of their self-gravity to form Class 0 stars."

"Within our picture, single low-mass, sunlike stars are not primordial," Stahler added. "They are the result of the breakup of binaries."

Their theory implies that each dense core, which typically comprises a few solar masses, converts twice as much material into stars as was previously thought.

Stahler said that he has been asking radio astronomers to compare dense cores with their embedded young stars for more than 20 years, in order to test theories of binary star formation. The new data and model are a start, he says, but more work needs to be done to understand the physics behind the rule.

Such studies may come along soon, because the capabilities of a now-upgraded VLA and the ALMA telescope in Chile, plus the SCUBA-2 survey in Hawaii, "are finally giving us the data and statistics we need. This is going to change our understanding of dense cores and the embedded stars within them," Sadavoy said.


----------



## ae1905

*Jupiter is the oldest planet in the Solar System*

The king of the planets started taking shape almost as soon as the sun formed 4.6 billion years ago.

   

Mariella Moon, @mariella_moon

Lawrence Livermore National Laboratory 

Jupiter's ancient name really is well-deserved: according to a new study, the king of the planets isn't just the largest in the Solar System, it's also the oldest. A team of researchers from Lawrence Livermore National Laboratory in California and the University of Munster in Germany have determined that Jupiter's core was already 20 times the size of Earth merely 1 million years after the sun took shape 4.6 billion years ago. Since newborn stars tend to release energy that blows away gas and dust for planet formation, the gas giant must have had to absorb materials very, very fast.

The team came to this conclusion after measuring the abundances of molybdenum and tungsten isotopes in iron meteorites that fell to Earth. The molybdenum isotope data showed that the meteorites contained components from two distinct reservoirs of material; one reservoir held material from a different stellar source that never reached the other. The tungsten isotope data, in turn, showed that the two pools of material remained separated for 2 to 3 million years, and that the separation began as early as a million years into the formation of the solar system.

The team explained that "the most plausible mechanism to efficiently separate two disk reservoirs for an extended period is the accretion of a giant planet in between them." Yes, that gas giant is Jupiter, and while its formation slowed as the years went by, it kept growing enough to form a permanent barrier between the two pools. The researchers now believe this could also be the reason there are no super-Earths near the sun, even though such planets are commonly found in other star systems. That means we _could_ owe our existence to Jupiter, because who knows whether, or how, life would have flourished on Earth had it been too near other, more massive planets.


----------



## ae1905

*Jupiter Now Has 69 Moons*

blogs.scientificamerican.com

Caleb A. Scharf

Our local gas giant has two more natural satellites added to its roster

_Credit: NASA, JPL-Caltech_ 

The planet Jupiter is a beast: Three-hundred-and-seventeen times the mass of the Earth, mostly made of metallic hydrogen, and at the center of an astonishing collective of orbiting natural bodies.

In fact, Jupiter's satellites form a shrunken version of a full planetary system: from the tightly bound larger Galilean moons (orbiting in their Laplacian mean-motion resonances, akin to places like TRAPPIST-1) to the remarkable array of smaller moonlets that encircle this world out to more than 30 million kilometers.
These bodies circle Jupiter in anywhere from about 7 hours to an astonishing 1,000 days. 
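That period range follows directly from Kepler's third law applied to Jupiter's mass. A quick sanity check (the GM value for Jupiter is the standard gravitational parameter; the 30-million-kilometer orbit is the outer figure from the text):

```python
import math

# Kepler's third law around Jupiter: T = 2*pi*sqrt(a^3 / GM).
GM_JUPITER = 1.267e17  # m^3 s^-2, Jupiter's standard gravitational parameter

def orbital_period_days(a_meters: float) -> float:
    """Orbital period in days for a circular orbit of semi-major axis a."""
    return 2 * math.pi * math.sqrt(a_meters**3 / GM_JUPITER) / 86400

# The outermost moonlets orbit some 30 million km out:
print(round(orbital_period_days(30e9)))  # ~1,060 days, matching the "1,000 days" above
```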

NASA's Juno spacecraft captured a series of time-lapse images of the large Galilean moons during the spacecraft's approach in early 2016.

Until recently the cataloged satellites totaled 67 in number. But only the innermost 15 of these orbit Jupiter in a prograde sense (in the direction of the planet's spin). The rest are retrograde, and are likely captured objects - other pieces of the solar system's solid inventory that strayed into Jupiter's gravitational grasp.

That population of outer moons is mostly small stuff: only a few are 20-60 kilometers in diameter, most are barely 1-2 kilometers in size, and they are increasingly difficult to spot.

Now astronomers Scott Sheppard, David Tholen, and Chadwick Trujillo have added two more, bringing Jupiter's moon count to 69.

These additions are also about 1-2 km in size, and were spotted in images that were part of a survey for much more distant objects out in the Kuiper Belt. Jupiter just happened to be conveniently close in the sky at the time. The moons are S/2016 J1 and S/2017 J1, and are about 21 million km and 24 million km from Jupiter.

By themselves these small satellites don't amount to much. But they are a vivid reminder of the sheer abundance of material out there in our solar system, and of Jupiter's royal gravitational status.


----------



## ae1905

*A Virtual Trick to Remove Racial Bias*

blogs.scientificamerican.com

Stephen L. Macknik
Example avatars used to determine how participants might behave when their race was swapped virtually. _Credit: Courtesy of Mel Slater_ 

A team of scientists has used full-body immersive virtual reality to dive deep into some of the potential contributors to racial bias. Beatrice S. Hasler, Bernhard Spanlang, and Mel Slater (affiliated with the Sammy Ofer School of Communications of Israel, the University of Barcelona and ICREA in Spain, and University College London in the UK) conducted a study that assigned two different VR bodies to each participant, one black and one white, to see how skin color affected their behavior. That’s a remarkable statement if you read it back slowly, so let’s first unpack its meaning.

The fundamental role of VR is to make you feel present in a place where you aren’t. This illusion—_presence_—is powerful and somewhat perplexing. How can I feel like I’m somewhere else just by watching a TV show? The answer to this lies in how consciousness is born in the brain.

Think about the place you are in right now. How much do you _know_ about it? You’ve used your sensory systems to collect information about the surfaces of objects around you: you may have looked at the ground, the walls and ceiling if there are any, furniture or other nearby objects. But how much detail did you sample from these surfaces? The answer is: not much.

Your visual system, for example, is your richest sense, but its ability to collect data about the real world is surprisingly limited. If you hold your thumb out at arm’s length and look at it, the area of your visual field in which you see well is about the size of your thumbnail. That’s about 0.1% (1/1000) of your entire visual field.

Everywhere else you are legally blind. You see so very little in high resolution that you must make eye movements to jump this valuable piece of real estate (your central vision) around the world, just to understand your environment. And you only make one to three eye movements each second. So, it follows that when you walked into your bedroom last night to sleep, you made about 10 or fewer eye movements in the three seconds it took you to get from the door to the bed. You thus saw less than 1% of your bedroom with high-quality vision. But you didn’t fall. You didn’t walk into the wall. Your head hit the pillow on target and not the foot of the bed. How is this feat possible given how little you could actually see?
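The arithmetic behind that "less than 1%" figure is worth making explicit. Using the numbers above (0.1 percent of the visual field per fixation, at most ten fixations) and generously assuming no two fixations overlap:

```python
# Upper bound on how much of a scene you see sharply in a 3-second walk:
# each fixation covers ~0.1% of the visual field (the thumbnail figure
# above), and 1-3 saccades per second allow at most ~10 fixations.
# Assuming no two fixations overlap (the most generous case):
fraction_per_fixation = 0.001   # 0.1% of the visual field per fixation
max_fixations = 10              # ~3 seconds at 1-3 saccades per second
coverage = fraction_per_fixation * max_fixations
print(f"at most {coverage:.1%} of the scene seen in high resolution")
# → at most 1.0% of the scene seen in high resolution
```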

The answer is that your brain invents the other 99% of the world, continuously and in real time throughout your entire life, based on your models and presumptions about the world. That’s what consciousness is: a simulation of reality created by our brains, which makes us feel like we know where we are. Your brain is so good at making stuff up that it can do it with VR gear instead of reality. Or by watching a movie or TV. Or by listening to a radio narrative. Or even by reading a book. Your imagination bestows you with the ability to consciously become aware of objects and sensations that don’t truly exist. That’s what your imagination is for—from a biologist’s evolutionary perspective. It is as critical a part of the visual system as the lens of your eye.

That’s how VR can be so immersive, despite the fact that it is literally created by holding a phone screen to your face and viewing it through a pair of thick spectacles: that’s more or less as much information as we get from reality anyway. The world is so complex that your brain must take shortcuts to get the hard work of surviving done before you starve to death. It must make fast decisions on little data, relying on probability to decide how you live. Because of this, it is a statistical certainty that some of your brain’s presumptions are wrong at least some of the time. In a perfect world, we would not trust them at all; but even the veridical world, alas, reaches us as a kind of virtual reality.

All of this means that VR can be used not just for entertainment, to turn you into Wonder Woman of the Justice League or Luke Skywalker in a galaxy far, far away. It can also help conduct some pretty cool and very well-controlled scientific experiments that would not be possible in the real world, such as changing your race and seeing how you react.

In this experiment, the Slater Lab used all white participants, who wore VR motion-capture suits that tracked their full body motion in real time. These suits don’t cause skin sensations; instead, they allow the computer to follow the participants’ posture so that their computer avatars mirror their movements precisely. This method reportedly creates a strong feeling of presence within a virtual body. The scientists put the participants in an environment where they interacted with an avatar interlocutor while both stood in front of a wall-sized mirror, so the participants could see themselves and their partner as they moved in real life, which further heightened the feeling of presence.

Participants unconsciously mimicked the body posture of their interlocutors, but did so more when their avatars shared the same race. What is especially interesting is that the true race of the participant did not matter: it’s as if they felt akin to their partner when they shared the same virtual skin, even though they had been a different race for their entire life.







Courtesy of Mel Slater

Panel A: A participant in her full-body motion capture suit and VR headset. 
Panel B: the participant from Panel A’s virtual experience. 
Panels C and D: The participant interacts with a virtual character in a VR world while scientists analyze her posture and mimicry behaviors. 

Previous studies had pursued questions of race mostly using questionnaires and the venerable Implicit Association Test (in which reaction times to questions are used to quantify the level of cognitive dissonance participants experience between pairs of potentially racially charged ideas). But these tests are subject to experimental difficulties because participants can attempt to modify their answers if they realize what the test is about. A major advantage of this new paradigm is that mimicry is unconscious, and the participant can be performing some unrelated task during the experiment.

The findings indicate that we use our congruency with people around us to determine whether we are members of the same group. We identify more with, and are more likely to feel like we can trust, people in our “in-group” than in the “out-group.” In-groups and out-groups are not determined solely by race, and friendships can break down race and social barriers, allowing us to bring each other into the fold irrespective of skin-deep differences or similarities. The importance of this study is that it shows that how we see others in relation to ourselves affects our behavior automatically, unconsciously, and above all, unreasonably.


----------



## ae1905

theatlantic.com *The Mussels That Eat Oil*

Ed Yong
In 2004, a team of geologists discovered something extraordinary while exploring the Gulf of Mexico. They were searching for sites where oil and gas seep out of the ocean floor, but instead, two miles below the ocean’s surface, they found a field of dormant black volcanoes. And unlike typical volcanoes that spew out molten rock, these had once belched asphalt. They looked like they had been fashioned from the same stuff used to pave highways, because that’s exactly what they were. The team named one of them Chapapote after the Aztec word for “tar.”

Even if the volcanoes aren’t erupting any longer, a world of asphalt seems like a particularly inhospitable environment. And yet, the team found that life flourished on the volcanoes. Tubeworms, sea lilies, and corals had embedded themselves among the asphalt. Clams and mussels were thriving among sediments that were slick with oil. Crabs scuttled over them, while fish swam past. Life, as they say, finds a way, even when that way involves growing on tar.

Many of these animals likely flourished by forming partnerships with microbes, which use chemicals like hydrogen sulfide and methane to make their own food. This way of life, known as chemosynthesis, is the oldest on the planet. It allows bacteria to thrive in deep-sea habitats that are untouched by sunlight and choked by toxic chemicals. And it allows animals to colonize those same worlds by relying on the bacteria for their nutrition. 

Nicole Dubilier, from the Max Planck Institute for Marine Microbiology, has spent much of her career studying chemosynthetic microbes and their animal hosts. She has now visited Chapapote and the asphalt volcanoes twice. “When the submersible comes up, it reeks of petroleum, and it’s filthy. We have to clean it with WD-40; it’s the only thing that works,” she says. “It’s shocking that animals can tolerate these conditions.”

In 2006, Dubilier collected two of the yellow mussels that grow on the vents. In their gills, she found not just the usual chemosynthetic microbes, but also a group of bacteria called _Cycloclasticus_. These are oil-eaters. When the Deepwater Horizon rig exploded in 2010, releasing 750 million liters of crude oil into the Gulf, _Cycloclasticus_ were among the microbes that showed up to digest the slick. Their presence suggested that the mussels could indirectly be digesting the oil and gas that regularly seep out of the volcano fields.

To confirm this idea, Dubilier returned to the site in 2015 and collected more mussels. Her colleague Maxim Rubin-Blum exposed them to naphthalene—a petroleum-derived chemical. And the mussels, to his surprise, did nothing. They were not digesting the naphthalene at all.

“Max nearly knocked himself out trying to get the experiments to work,” Dubilier says.

Rubin-Blum worked out what was going on by sequencing the genomes of the mussels’ microbes. When _Cycloclasticus_ grows on oil, independent of the asphalt-volcano mussels, it attacks a group of chemicals called polycyclic aromatic hydrocarbons (PAHs), of which naphthalene is a member. These are usually very hard to break down because they contain tough ring-shaped chemical bonds, but _Cycloclasticus_ has a large toolbox of genes that can tear these bonds apart. (Their name comes from the Greek for “ring” and the Latin for “breaker.”) 

But Rubin-Blum found that the _Cycloclasticus_ strains in the mussels have lost these PAH-breaking genes. Instead, they dine on chemicals in oil like methane, ethane, propane, and other alkanes, which are simpler in structure and take less energy to digest.

“It’s a jaw-dropping finding,” says Mandy Joye from the University of Georgia, who studies the microbes that bloom at oil spills. Those strains were thought to focus on PAHs. But Dubilier found that several of the genes that the mussel-bound microbes use to digest alkanes were also present in the _Cycloclasticus_ strains that showed up at Deepwater Horizon. This suggests that free-living microbes have a much broader range of oil-digesting strategies than previously assumed.

In open water, Dubilier thinks that microbes break down alkanes very quickly, forcing _Cycloclasticus_ to focus on the tougher PAHs. But the mussels provide the microbes with a constant supply of alkanes by continuously pumping oil-contaminated water over their gills. In this cossetted world, with a conveyor belt of snacks and no competitors, _Cycloclasticus_ has effectively become domesticated. It lost the ability to digest PAHs and adapted to a more abundant and considerably easier source of food. “It’s like they’ve evolved to live off cake,” says Dubilier.

“It’s all about food,” says Colleen Cavanaugh from Harvard University, who first discovered chemosynthetic microbes in the 1980s. The microbes get a regular delivery of fast food from their hosts, and the mussels live off the byproducts of their partners’ digestive work. “This allows the partners to colonize and thrive in the deep-sea—an otherwise inhospitable environment due to the lack of food.”

Dubilier notes that the mussels she studied on volcanoes evolved from shallow-water relatives around 15 million years ago. And there are more than 50 related species that have all colonized inhospitable environments like hydrothermal vents and asphalt volcanoes by teaming up with microbes. “They’re like Darwin’s finches,” she says.


----------



## ae1905

*Creating a Universe in the Lab? The Idea Is No Joke*

blogs.discovermagazine.com 

_(Credit: Shutterstock)_

Physicists aren’t often reprimanded for using risqué humor in their academic writings, but in 1991 that is exactly what happened to the cosmologist Andrei Linde at Stanford University. He had submitted a draft article entitled ‘Hard Art of the Universe Creation’ to the journal _Nuclear Physics B_. In it, he outlined the possibility of creating a universe in a laboratory: a whole new cosmos that might one day evolve its own stars, planets and intelligent life. Near the end, Linde made a seemingly flippant suggestion that our Universe itself might have been knocked together by an alien ‘physicist hacker’. The paper’s referees objected to this ‘dirty joke’; religious people might be offended that scientists were aiming to steal the feat of universe-making out of the hands of God, they worried. Linde changed the paper’s title and abstract but held firm over the line that our Universe could have been made by an alien scientist. ‘I am not so sure that this is just a joke,’ he told me.

Fast-forward a quarter of a century, and the notion of universe-making – or ‘cosmogenesis’ as I dub it – seems less comical than ever. I’ve travelled the world talking to physicists who take the concept seriously, and who have even sketched out rough blueprints for how humanity might one day achieve it. Linde’s referees might have been right to be concerned, but they were asking the wrong questions. The issue is not who might be offended by cosmogenesis, but what would happen if it were truly possible. How would we handle the theological implications? What moral responsibilities would come with fallible humans taking on the role of cosmic creators?

Theoretical physicists have grappled for years with related questions as part of their considerations of how our own Universe began. In the 1980s, the cosmologist Alex Vilenkin at Tufts University in Massachusetts came up with a mechanism through which the laws of quantum mechanics could have generated an inflating universe from a state in which there was no time, no space and no matter. There’s an established principle in quantum theory that pairs of particles can spontaneously, momentarily pop out of empty space. Vilenkin took this notion a step further, arguing that quantum rules could also enable a minuscule bubble of space itself to burst into being from nothing, with the impetus to then inflate to astronomical scales. Our cosmos could thus have been burped into being by the laws of physics alone. To Vilenkin, this result put an end to the question of what came before the Big Bang: nothing. Many cosmologists have made peace with the notion of a universe without a prime mover, divine or otherwise.

At the other end of the philosophical spectrum, I met with Don Page, a physicist and evangelical Christian at the University of Alberta in Canada, noted for his early collaboration with Stephen Hawking on the nature of black holes. To Page, the salient point is that God created the Universe _ex nihilo –_ from absolutely nothing. The kind of cosmogenesis envisioned by Linde, in contrast, would require physicists to cook up their cosmos in a highly technical laboratory, using a far more powerful cousin of the Large Hadron Collider near Geneva. It would also require a seed particle called a ‘monopole’ (which is hypothesized to exist by some models of physics, but has yet to be found).

The idea goes that if we could impart enough energy to a monopole, it would start to inflate. Rather than growing in size within our Universe, the expanding monopole would bend spacetime within the accelerator to create a tiny wormhole tunnel leading to a separate region of space. From within our lab we would see only the mouth of the wormhole; it would appear to us as a mini black hole, so small as to be utterly harmless. But if we could travel into that wormhole, we would pass through a gateway into a rapidly expanding baby universe that we had created.

Current theory suggests that, once we have created a new universe, we would have little ability to control its evolution or the potential suffering of any of its residents. Wouldn’t that make us irresponsible and reckless deities? I posed the question to Eduardo Guendelman, a physicist at Ben Gurion University in Israel, who was one of the architects of the cosmogenesis model back in the 1980s. Today, Guendelman is engaged in research that could bring baby-universe-making within practical grasp. I was surprised to find that the moral issues did not cause him any discomfort. Guendelman likens scientists pondering their responsibility over making a baby universe to parents deciding whether or not to have children, knowing they will inevitably introduce them to a life filled with pain as well as joy.

Other physicists are more wary. Nobuyuki Sakai of Yamaguchi University in Japan, one of the theorists who proposed that a monopole could serve as the seed for a baby universe, admitted that cosmogenesis is a thorny issue that we should ‘worry’ about as a society in the future. But he absolved himself of any ethical concerns today. Although he is performing the calculations that could allow cosmogenesis, he notes that it will be decades before such an experiment might feasibly be realized. Ethical concerns can wait.

Many of the physicists I approached were reluctant to wade into such potential philosophical quandaries. So I turned to a philosopher, Anders Sandberg at the University of Oxford, who contemplates the moral implications of creating artificial sentient life in computer simulations. He argues that the proliferation of intelligent life, regardless of form, can be taken as something that has inherent value. In that case, cosmogenesis might actually be a moral obligation.

Looking back on my numerous conversations with scientists and philosophers on these issues, I’ve concluded that the editors at _Nuclear Physics B_ did a disservice both to physics and to theology. Their little act of censorship served only to stifle an important discussion. The real danger lies in fostering an air of hostility between the two sides, leaving scientists afraid to speak honestly about the religious and ethical consequences of their work out of concerns of professional reprisal or ridicule.

We will not be creating baby universes anytime soon, but scientists in all areas of research must feel able to freely articulate the implications of their work without concern for causing offense. Cosmogenesis is an extreme example that tests the principle. Parallel ethical issues are at stake in the more near-term prospects of creating artificial intelligence or developing new kinds of weapons, for instance. As Sandberg put it, although it is understandable that scientists shy away from philosophy, afraid of being thought weird for veering beyond their comfort zone, the unwanted result is that many of them keep quiet on things that really matter.

As I was leaving Linde’s office at Stanford, after we’d spent a day riffing on the nature of God, the cosmos and baby universes, he pointed at my notes and commented ruefully: ‘If you want to have my reputation destroyed, I guess you have enough material.’ This sentiment was echoed by a number of the scientists I had met, whether they identified as atheists, agnostics, religious or none of the above. The irony was that if they felt able to share their thoughts with each other as openly as they had with me, they would know that they weren’t alone among their colleagues in pondering some of the biggest questions of our being.









_This article was originally published at Aeon and has been republished under Creative Commons._


----------



## tryingtodobetter

Apple is considering moving into healthcare! Finally!

Apple plots medical record integration with iPhone


----------



## ae1905

*The Science of Microaggressions: It's Complicated*

blogs.scientificamerican.com

Scott Lilienfeld
The story of racial prejudice in the U.S. over the past several decades is a tale of good and bad news. On the mostly positive side, surveys of the American public suggest that overt prejudice—biases to which people are willing to admit—has been on the steady decline (although some data suggest an uptick following the presidential election of Barack Obama). On the negative side, prejudice, even in its ugliest forms, is far from eradicated. In the weeks preceding my writing of this column racial slurs surfaced on the gates of the home of basketball superstar LeBron James, and nooses were found hanging at museums in our nation’s capital.

What’s more, such overt prejudice might only be the tip of a massive iceberg. A number of prominent scholars have maintained that a good deal of racial bias has merely “gone underground,” assuming insidious forms such as implicit prejudice. Although the science of implicit prejudice is controversial, few researchers dispute that bigotry is at times manifested in subtle ways.

Against this backdrop, the concept of “microaggressions” has recently received a flurry of attention. Coined in 1970 by Harvard University psychiatrist Chester Pierce, the term “microaggression” refers to a subtle slight or snub directed toward historically stigmatized individuals, especially minorities. The concept lay largely dormant until 2007, when an influential article by Columbia University counseling psychologist Derald Wing Sue and his co-authors brought it to the attention of a mainstream academic audience. According to Sue and his collaborators, the toxicity of microaggressions stems largely from their ambiguity.

Victims of these often largely invisible but pernicious statements and actions find themselves trapped in a catch-22. If they ignore microaggressions directed their way, they risk becoming the target of future transgressions from the same “perpetrators,” the term commonly used in the microaggression literature to refer to people who regularly emit such statements and actions. In contrast, if victims accuse perpetrators of aggressing against them, they risk being accused of hypersensitivity and even paranoia.

Sue and co-authors further contend that each microaggression carries a single implicit message. For example, according to them, the microaggression “America is a melting pot” communicates that minority individuals should assimilate into the broader culture, and the microaggression “You are so articulate” communicates that most minority individuals are inarticulate. When encountered frequently over long stretches of time, Sue and colleagues argue, microaggressions exert a detrimental impact on recipients, contributing to low self-esteem and, in some cases, clinical levels of depression, anxiety and other mental health problems.

Over the past few years, the term “microaggression” has become widely used on college and university campuses as well as in scores of businesses. Hundreds of institutions of higher learning now distribute standard lists of microaggressions, many of them modeled after a compendium that appeared in Sue and colleagues’ 2007 article, and warn faculty and staff members to steer clear of them.

Many colleges and universities have also instituted training programs to educate faculty and staff about the hazards of microaggressions; these programs have caught on in numerous corporations, too. And several Facebook pages are dedicated to the reporting of microaggressions. Not surprisingly, in 2015 the Global Language Monitor dubbed “microaggression” its word of the year.

To be clear, Sue and his collaborators deserve credit for helping to bring the prevalence and importance of subtle prejudice to wider public and academic attention. But is the microaggression concept grounded in solid science, and is it helpful? It is certainly possible microaggressions can in some cases be harmful, especially when people are exposed to them repeatedly for years, and we need to study this phenomenon better so that we can encourage difficult conversations, not squelch them. Nevertheless, given the provisional state of the literature, it is all but impossible to determine the degree to which people’s reactions to microaggressions are attributable to these stimuli themselves as opposed to people’s subjective reactions to them.

In an article published earlier this year in the Association for Psychological Science journal _Perspectives on Psychological Science_, I canvassed the extant literature to address these questions. In general, I found that the sizable program of research dedicated to microaggressions raises far more questions than answers, and is far too preliminary to justify real-world applications, including training programs.

For one thing, microaggressions as Sue and others conceptualize them lie entirely in the eyes of the beholder. Therefore, if a person feels “microaggressed” against, he or she is automatically deemed to be the victim of a microaggression. The problems here are twofold: First, if person A is offended by a statement but person B is not, this would mean it both is and is not a microaggression, a proposition that is patently illogical. Second, science hinges on the ability to corroborate findings using converging sources of evidence. If a concept is entirely subjective, it is exceedingly difficult to study it scientifically, let alone subject it to rigorous tests.

Further, the very term “microaggression” is fraught, as it implies that the person emitting microaggressions is behaving aggressively. Yet Sue and his collaborators admit most microaggressions are inadvertent. This acknowledgement leads to another logical contradiction, because psychologists almost invariably define aggression as intentional harm directed toward another person (or animal). The worry here is more than semantic. Psychological research demonstrates that if person A believes that person B’s actions are intentionally hostile, he or she is more likely to respond aggressively in turn. People’s perceived motives matter.

Like many psychological concepts, microaggression has fuzzy boundaries. That fact by itself is not troubling. Psychologists routinely conduct high-quality research on such concepts as intelligence, impulsivity and depression, none of which lends itself to a strict, dictionary-type (or what scientists call “operational”) definition.

Still, the microaggression concept is so nebulously defined that virtually any statement or action that might offend someone could fall within its capacious borders. For example, according to Sue and his colleagues, saying that “America is a land of opportunity” is a microaggression. Yet many nonprejudiced Americans, including many minorities, would surely endorse that assertion. In a study published in a major journal, one researcher argued that a clinical supervisor’s being overly critical of a trainee counts as a microaggression; yet being insufficiently critical of this same trainee counts as a microaggression, too.

In some cases, the boundaries of the microaggression concept have become so vast as to invite ridicule. Last year an employee forum at the University of North Carolina at Chapel Hill proclaimed that telling a co-worker, “I love your shoes,” or organizing golf outings with colleagues were microaggressions (in the first case because doing so might be perceived as patronizing, in the second because it might be perceived as presumptuous). And a group of researchers recently labeled the phrase “God bless you” following a sneeze a microaggression, presumably because it might offend certain nonreligious individuals. To his credit, even Sue has expressed misgivings about the increasingly indiscriminate application of the microaggression concept. Yet without considerably greater clarity regarding the definition of microaggressions, misuse and abuse of the concept seem virtually inevitable.

A final key problem is that the microaggression literature neglects to distinguish between the perceived and the perceiver. For decades, psychologists have recognized that our reactions to the world are shaped by both reality and our interpretations of it. Research demonstrates certain people are marked by consistently high levels of hostile attributional bias: a propensity to perceive aggressive intent in response to ambiguous stimuli.

The science aside, it is crucial to ask whether conceptualizing the interpersonal world in terms of microaggressions does more good than harm. The answer is “We don’t know.” Still, there are reasons for concern. Encouraging individuals to be on the lookout for subtle, in some cases barely discernible, signs of prejudice in others puts just about everyone on the defensive. Minority individuals are likely to become chronically vigilant to minor indications of potential psychological harm whereas majority individuals are likely to feel a need to walk on eggshells, closely monitoring their every word and action to avoid offending others. As a consequence, microaggression training may merely ramp up already simmering racial tensions.

None of this means the microaggression concept is useless. It might be helpful if viewed as the beginning, not the end, of a constructive and mutually enlightening conversation between two people of differing backgrounds. If instead of saying, “You microaggressed against me. I am offended by your prejudice and you need to stop,” people were to say, “You probably didn’t mean to do so, and perhaps I’m taking this the wrong way, but I was a bit hurt by what you said. Maybe we’re just misunderstanding each other. Let’s talk,” college campuses and businesses would be far better off. We need to encourage difficult conversations, not stifle them. 

The views expressed are those of the author(s) and are not necessarily those of Scientific American. 

Scott Lilienfeld is a professor of psychology at Emory University.


----------



## ae1905




----------



## ae1905

blogs.discovermagazine.com *Another stunner from the Juno spacecraft: Jupiter's giant cloud bands and 'String of Pearls'*
This enhanced-color image of Jupiter was created by citizen scientists Gerald Eichstädt and Seán Doran using data from the JunoCam imager on NASA’s Juno spacecraft. (Source: NASA/JPL-Caltech/SwRI/MSSS/Gerald Eichstädt /Seán Doran)

After a bit of an absence for vacation, and to finish work on a feature article on Arctic climate change and geopolitics for bioGraphic magazine, I’m back to blogging here at ImaGeo. And when I spotted this arresting image of Jupiter from the Juno spacecraft, I knew this had to be my first post since returning.
Before I get into the details, you might be wondering how images of far away planets fit in a blog dedicated in large measure to the science of our planet. That word, ‘planet,’ gets at the answer. Here at ImaGeo I frequently feature images and write stories that consider Earth from a planetary perspective — remote sensing images of storms, for example, and articles examining how the global climate is changing and how our activities as humans are contributing.

For scientists, understanding our solar system siblings, including Jupiter as well as Mars and all the others, provides insight into the origin and evolution of our own planet. This in turn can help explain why Earth alone came to host an unimaginably diverse array of life forms, including a primate species capable of asking questions about its cosmic origins — and also to fling spacecraft out to other planets in a quest to answer those questions.

So that’s why I frequently post compelling images of other planets here, including the one above. It was acquired on May 19, 2017 by NASA’s Juno spacecraft from an altitude of about 20,800 miles above Jupiter’s cloud tops.

This is not how Jupiter would appear to our eyes if we were orbiting the planet aboard Juno. Citizen scientists Gerald Eichstädt and Seán Doran processed the raw image data from the JunoCam imager aboard Juno in a way that makes details in the giant gaseous planet’s bands of clouds really pop.

As NASA describes things:

Each of the alternating light and dark atmospheric bands in this image is wider than Earth, and each rages around Jupiter at hundreds of miles (kilometers) per hour. The lighter areas are regions where gas is rising, and the darker bands are regions where gas is sinking.
Also prominent in the image are white ovals known as the “String of Pearls,” visible near the top. These are counterclockwise-rotating storms.

All of us are welcome to try our hands at processing raw JunoCam images. I’ve done a bit myself, and it can be a blast. Here’s an example:
A processed, closeup image of Jupiter’s swirling cloud-tops, based on raw data acquired by the Junocam instrument aboard NASA’s Juno spacecraft on May 19, 2017. (Source: Raw imagery: NASA / SwRI / MSSS. Processing: Tom Yulsman)

My goal wasn’t scientific. I wanted to produce something almost painterly — something that an artist might have created.

For more information about using raw JunoCam data to create your own images, check out Juno’s image processing pages.
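This is not the pipeline Eichstädt, Doran, or NASA use; purely as a hedged illustration, a percentile-based contrast stretch of the sort hobbyists often start with looks like this, applied to a made-up strip of pixel values:

```python
# Toy linear contrast stretch: remap the 2nd-98th percentile range of
# 0-255 grayscale pixels onto the full 0-255 range, so that flat,
# low-contrast detail (like subtle cloud-band structure) "pops".
def stretch(pixels, lo_pct=2, hi_pct=98):
    """Return pixels with the lo_pct..hi_pct range expanded to 0-255."""
    ordered = sorted(pixels)
    lo = ordered[len(ordered) * lo_pct // 100]
    hi = ordered[min(len(ordered) * hi_pct // 100, len(ordered) - 1)]
    span = max(hi - lo, 1)                      # avoid division by zero
    return [min(255, max(0, (p - lo) * 255 // span)) for p in pixels]

raw = [100, 110, 120, 130, 140]   # a flat, low-contrast strip of pixels
print(stretch(raw))               # spreads the values across 0-255
```

Real processing of JunoCam frames also involves reassembling the camera’s color strips and correcting geometry, which this sketch ignores.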


----------



## ae1905

theguardian.com 
*New study confirms the oceans are warming rapidly*

John Abraham

As humans put ever more heat-trapping gases into the atmosphere, the Earth heats up. These are the basics of global warming. But where does the heat go? How much extra heat is there? And how accurate are our measurements? These are questions that climate scientists ask. If we can answer them, we will be better prepared for a future with a very different climate, and better able to predict what that future climate will be.

The most important measurement of global warming is in the oceans. In fact, “global warming” is really “ocean warming.” If you are going to measure the changing climate of the oceans, you need to have many sensors spread out across the globe that take measurements from the ocean surface to the very depths of the waters. Importantly, you need to have measurements that span decades so a long-term trend can be established. 

These difficulties are tackled by oceanographers, and a significant advancement was presented in a paper just published in the journal Climate Dynamics. That paper, which I was fortunate to be involved with, looked at three different ocean temperature measurements made by three different groups. We found that regardless of whose data was used or where the data was gathered, the oceans are warming.

In the paper, we describe perhaps the three most important factors that affect ocean-temperature accuracy. First, sensors can have biases (they can read “hot” or “cold”), and these biases can change over time. One example of bias dates to the 1940s. At the time, many ocean temperature measurements were made using buckets that gathered water from ships; sensors put into the buckets would give the water temperature. Then a new approach started to come online, in which temperatures were measured by hull-based sensors at the ships’ engine intake ports. It turns out that bucket measurements run slightly cooler than measurements made with hull sensors, which sit closer to the ship’s engine.

During World War II, the British Navy cut back on its measurements (using buckets) and the US Navy expanded its measurements (using hull sensors); consequently, a sharp warming in the oceans was seen in the data. But this warming was an artifact of the change from buckets to hull sensors. After the war, when the British fleet re-expanded its bucket measurements, the ocean temperatures seemed to fall a bit. Again, this was an artifact of the data collection. Other such biases and artifacts arose over the years as oceanographers updated their measurement equipment. If you want the true rate of ocean temperature change, you have to remove these biases.

Another source of uncertainty is related to the fact that we just don’t have sensors at all ocean locations and at all times. Some sensors, which are dropped from cargo ships, are densely located along major shipping routes. Other sensors, dropped from research vessels, are also confined to specific locations across the globe. 

Currently, we rely heavily on the Argo fleet, which contains approximately 3,800 autonomous devices spread out more or less uniformly across the ocean, but these only entered service in 2005. Prior to that, temperature measurements were not uniformly distributed across the oceans. As a consequence, scientists have to use what is called a “mapping” procedure to interpolate temperatures between measurements, sort of like filling in the gaps where no data exist. The mapping strategy used by scientists can affect the ocean temperature measurements.
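To illustrate the mapping idea, here is a minimal sketch of one simple interpolation scheme, inverse-distance weighting, with made-up readings. This is purely illustrative and is not the actual procedure used by any of the three groups:

```python
import math

def idw_interpolate(stations, target, power=2):
    """Estimate the temperature at `target` = (lat, lon) from sparse
    station readings using inverse-distance weighting: nearby
    measurements count for more than distant ones."""
    num = den = 0.0
    for (lat, lon), temp in stations:
        # crude flat-grid distance; real schemes use spherical geometry
        d = math.hypot(lat - target[0], lon - target[1])
        if d == 0:
            return temp  # exact hit: use the measurement directly
        w = 1.0 / d ** power
        num += w * temp
        den += w
    return num / den

# Hypothetical readings (deg C) at scattered ocean locations
readings = [((10.0, 20.0), 15.2), ((12.0, 25.0), 14.8), ((8.0, 22.0), 15.6)]
print(round(idw_interpolate(readings, (10.0, 22.0)), 2))  # -> 15.32
```

Real mapping schemes are far more sophisticated (they account for currents, basin boundaries, and measurement error), which is exactly why different groups' mapping choices can yield different temperature estimates.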

Finally, temperatures are usually referenced to a baseline “climatology.” So, when we say temperatures have increased by 1 degree, it is important to say what the baseline climatology is. Have temperatures increased by 1 degree since the year 1990? Since the year 1970? Since 1900? The choice of baseline climatology really matters.
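A toy calculation makes the baseline point concrete. The decadal means below are invented for illustration; only the arithmetic matters:

```python
# Hypothetical decadal-mean ocean temperatures (deg C), invented for illustration
decadal_means = {1900: 16.0, 1970: 16.3, 1990: 16.5, 2010: 16.8}

def anomaly(year, baseline_year):
    """Temperature change at `year` relative to the chosen baseline climatology."""
    return decadal_means[year] - decadal_means[baseline_year]

# The same 2010 ocean yields three different "warming" figures
for base in (1900, 1970, 1990):
    print(f"2010 relative to {base}: +{anomaly(2010, base):.1f} deg C")
```

The ocean hasn't changed between the three print lines; only the reference period has, which is why any stated warming figure is meaningless without its baseline.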

In the study, we looked at the different ways the three groups make decisions about mapping, bias, and climatology. We asked not only how much the oceans are warming, but also how the warming differs across areas (ocean basins) and depths. We found that each ocean basin has warmed significantly. That said, there are some differences amongst the three groups: for instance, at 300-700 meter depths in the Pacific and Southern oceans, the three groups disagree significantly. But the central fact is that regardless of how you measure, who does the measurements, and when or where the measurements are taken, the oceans are warming.

The lead author, Dr. Gongjie Wang, described the importance of the study this way:

_Our study confirms again a robust global ocean warming since 1970. However, there is substantial uncertainty in decadal scale ocean heat redistribution, which explains the contradictory results related to the ocean heat changes during the “slowdown” of global warming in recent decade. Therefore, we recommend a comprehensive evaluation in the future for the existing ocean subsurface temperature datasets. Further, an improved ocean observation network is required to monitor the ocean change: extending the observations in the boundary currents systems and deep oceans (below 2000-m) besides maintaining the Argo network._
In plain English, it will be important that we keep high-quality temperature sensors positioned throughout the oceans so that in the future we will be able to predict where our climate is headed. We say in science that a measurement not made is a measurement lost forever. And there are no more important measurements than those of the heating of the oceans.


----------






## ae1905

blogs.discovermagazine.com *The Strangest (and Second-Strangest) Star in the Galaxy*
Two ways to look at Tabby’s Star: as intriguing data, or as an invitation to flights of fancy. (_Credit: Tabetha Boyajian, left; FantasyWallpapers.com, right_)

There’s an old saying: “Great discoveries don’t begin with ‘eureka!’; they begin with someone muttering, ‘That’s odd…’” I’ve long attributed the quote to the great science popularizer Isaac Asimov. Jason Wright gently corrects me. He has researched the line, he explains, and could find no evidence that Asimov ever spoke or wrote those words. The correction is a tidy encapsulation of what Wright is about: he is attracted to the odd side of science, and he is also a relentless sleuth.

Wright, an astronomer at Penn State, is one of the lead scientists investigating the strangely flickering object commonly known as Tabby’s Star or, in the popular press, as the “alien megastructure star.” The star’s behavior is so puzzling that Wright included among the possible explanations that a huge construction project is orbiting around it. (Note that he never suggested aliens were the best explanation, merely that the hypothesis could not yet be ruled out.) Lately Tabby’s Star has been acting up again, providing intriguing new data but, so far, still no definitive answers.

While Tabby’s Star continues to vex and excite the astronomical community, Wright is busy thinking about other puzzles as well. I was particularly intrigued by another misbehaving star with the mouthful of a name Przybylski’s Star (pronounced “jebilskee,” roughly). If Tabby’s Star is the most mysterious star in our galaxy, as Tabetha Boyajian, who first characterized it, has called it, then Przybylski’s Star may qualify as the second-most strange and mysterious star around. In this case the puzzle is the star’s composition, which appears to include radioactive actinides, short-lived elements normally found only in nuclear experiments on Earth.

I’m a sucker for unexplained scientific anomalies, so I caught up with Wright to hear his thoughts on these astronomical outliers. What follows is an edited version of our conversation. [For astronomy updates, follow me on Twitter: @coreyspowell]

*Tell me about Przybylski’s Star: What is it that makes this object so unusual?*
Its spectrum is extremely peculiar. Everyone who’s seen it says it’s the strangest stellar spectrum they’ve ever seen. It’s got an abundance pattern that is very hard to understand. In terms of exactly what abundances of what elements it’s showing us, I don’t know. Some people say there are so many lines you really can’t tell what you’re looking at. [Lines in a star’s spectrum are used to identify its chemical composition.] But, they have to be the lines of some elements! And that’s what people try to figure out.

*Some of those elements appear to be short-lived radioactive isotopes, which makes no sense; such atomic nuclei should long ago have decayed and vanished. How can that be?*
The identification of short-lived isotopes seems like it must be wrong, because there’s no way to generate them. There is one clever way to hypothesize how they could be generated, however. [The hypothesis is that Przybylski’s Star contains as-yet undiscovered ultraheavy elements, which then decay into the short-lived byproducts we see.] It takes short-lived actinides being in there from impossible to _not totally_ impossible. The spectroscopic evidence points toward the short-lived nuclides really being there. Our standard model for stars implies that the observation is probably wrong. But there’s a small possibility that some really cool nuclear physics is going on in this star.

*There’s another even further-out possibility, that intelligent aliens put the radioactive elements in there as a kind of chemical signpost…*
Arthur C. Clarke had raised the idea of looking for radioactive elements like promethium in stars as an alien signal. I found that sort of silly; why would aliens dump their promethium in stars? What would the point be? To get our attention, I guess, but I think there are easier ways to get our attention. It’s neat that somebody said, Hey we should look for it, and then somebody else found something that seems to match what they predicted. But I wouldn’t put alien technology on a serious list of things that might be going on with Przybylski’s Star.
Jason Wright really enjoys a cracking good astronomical puzzle. (Credit: J. Wright)

*How do you make progress understanding an object that seems to exist in a category of one?*
Przybylski’s Star is not entirely alone. It’s the most peculiar of peculiar A stars [white-hot stars, brighter and more massive than our sun]. Other A stars are also peculiar, so it does seem to be part of a family. Like Tabby’s Star, it’s telling us that there are some phenomena out there that we just don’t have a good handle on yet. It means we’re missing something important somewhere, and that’s neat. When you find the anomalies, whatever the answer is it’s interesting.

There are a lot of anomalies that get swept aside with a ‘who cares?’ There’s a triple-star system found by the Kepler space telescope. At least, the pattern of light dips looks a lot like it’s a triple star system of some kind, but no one’s been able to figure out what the three stars are doing. It’s called “the impossible triple.” People just throw their hands up and say, we’re just not clever enough to figure out exactly what we’re seeing. It’s probably just something ordinary we haven’t thought of yet.

The peculiar A stars felt like that to me for a while: curious, but who knows why. But if the island of stability [hypothetical ultra-heavy, quasi-stable elements] might be involved, that feels like a high enough reward that it’s worth investigating. Nobody has yet figured out how to generate those elements on Earth. If they are just sitting there in Przybylski’s Star, presumably we could study them there, in a way that’s hard to do in the lab. These stars turn out to be a natural laboratory for something really interesting. Or it could all be a mistake!

*And what about the more famous mystery star, Tabby’s Star? What’s the latest there?*
Telescopes are taking data on the star every day; Tabby [Boyajian] is really in charge of that. Everybody is observing, including the AAVSO and the Fairborn Observatory. We just want to catch more of those 10 percent dimming events, and when one happens, gather lots and lots of spectra as the star dips. [Latest updates here.]


We’re looking for spectral changes: When the star is getting dimmer, is that because something is in front of it? If it’s because something’s in front of the star, we want spectra in case it’s absorbing certain wavelengths—that would tell us what it’s made of—and we want to know how much dimmer it is in different wavelengths over time. Swift can measure the ultraviolet emission, Las Cumbres and other ground-based observatories can measure optical, Spitzer can measure infrared. Put all that together and we should be able to build a picture of what we’re looking at!

*Everyone wants to know what is making the star flicker. Comets, gas cloud, black hole, aliens. How will we tell?*
If diffuse material is passing in front of the star, then it will leave spectral fingerprints [as the light goes through]. We’ll take spectra and look for the usual suspects: hydrogen, sodium, magnesium, the elements that have the strongest fingerprints that we expect to see. If it’s dust, we expect the dips to be deeper in the blue than they are in the red. In blue light or ultraviolet light it will get very dim, and in red or infrared it will hardly get dimmer at all.

If the star gets dimmer the same at all wavelengths, that means we are looking through something very optically thick, meaning that light doesn’t penetrate at any wavelengths. That could be a thick disk with some well-defined edge where we don’t see any diffuse part of it blocking the star. If it’s the star itself getting dimmer, then we’ll expect to see changes in the spectral features of the star as it gets cooler or smaller or whatever it is doing.
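The wavelength test Wright describes can be reduced to a simple comparison of dip depths. The sketch below uses invented numbers, not real photometry of Tabby's Star, and the 10 percent tolerance is an arbitrary illustrative threshold:

```python
def dip_depth(baseline_flux, dip_flux):
    """Fractional dimming: 0.1 means the star is 10 percent dimmer."""
    return 1.0 - dip_flux / baseline_flux

def classify_dip(depth_blue, depth_red, tolerance=0.1):
    """Dust scatters blue light more strongly, so a dusty dip is deeper
    in the blue; a grey (optically thick) occulter dims all
    wavelengths about equally."""
    if depth_red == 0:
        return "dust-like (no red dimming at all)"
    ratio = depth_blue / depth_red
    if abs(ratio - 1) <= tolerance:
        return "grey / optically thick"
    if ratio > 1:
        return "dust-like (deeper in the blue)"
    return "unexpected (deeper in the red)"

# Hypothetical event: an 8 percent dip in blue but only 3 percent in red
print(classify_dip(dip_depth(1.0, 0.92), dip_depth(1.0, 0.97)))
```

In practice the comparison is made across many bands at once, but the logic is the same: chromatic dips point to diffuse material, achromatic dips to something opaque.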

*What if you don’t see any obvious spectral changes?*
Hmmm…I don’t know! If Tabby’s star just gets dimmer but the spectrum doesn’t change, that would be quite a puzzle! You’d have to think of a very dense annulus or something. Right now we’re waiting to see where the data take us. From a SETI perspective, if the spectrum didn’t change but the whole thing got dimmer, that would suggest a large opaque object. But that still wouldn’t prove it.

*You drew up a provocative list of potential explanations for Tabby’s star. How do you evaluate what is plausible and implausible when you are dealing with such an unusual object?*
We admitted that our list was subjective. You can only do it subjectively. We were trying to stay open minded. One way to do it is to count unicorns. How many unicorns do you have to invoke [to make your explanation work]? There are all those horses out there, and maybe one is a unicorn. Maybe you are allowed one. But once you invoke three of them, your answer becomes so contrived that you look for other explanations. Basically we were counting unicorns. For each explanation, how many crazy things would have to be tricking us?

*What draws you to scientific oddballs like Tabby’s Star and Przybylski’s Star?*
I like puzzles that not a lot of people are working on. Peculiar A stars were once a hot topic, but there’s a whole younger generation of astronomers who’ve never even heard about them. I thought it was neat to pick it up again and show just how bizarre these things are, and hopefully get some people thinking about it. And indeed, I’ve discovered a few younger astronomers who have recently rediscovered them and are working on them. That’s neat.


----------



## ae1905

washingtonpost.com *Why monkeys can't talk is a heated question among evolutionary scientists*

By Ben Guarino
[HR][/HR] Decades ago, while Philip H. Lieberman was soaking in a bathtub and listening to the radio, he heard anthropologist Loren Eiseley ponder an evolutionary puzzle: Why couldn't monkeys talk? Like us, they're social primates, intelligent and certainly not quiet. Rhesus macaques grunt, coo, screech and scream. Infant macaques make sounds known as geckers. Despite the grunting and geckering, though, no other primates — not even the chimpanzees and bonobos, our nearest ape relatives — can make the vowel and consonant sounds we know as speech.

Scientists figured there were two likely sticking points. Either the brain was not wired for speech in nonhuman primates, or their windpipes were shaped the wrong way.

Lieberman, a professor emeritus of anthropology at Brown University in Rhode Island, got out of the tub and took the puzzle with him. In groundbreaking experiments with rhesus macaques in the late 1960s and early 1970s, Lieberman and his colleagues pinned the problem to monkey throats. They concluded that macaques lacked a sufficient supralaryngeal vocal tract, the space in humans that begins in the mouth and follows the hump of the tongue into the throat. Even if a monkey brain had the correct wiring for speech, the monkey vocal tract simply couldn't produce adequate sounds to talk.

This vocal tract explanation caught on, appearing in textbooks and even a science comic book. “Among experts in the evolution of speech, this idea was common but not widespread,” said W. Tecumseh Fitch, a professor of cognitive biology at the University of Vienna, in an email. “But among biologists, anthropologists, and psychologists it was, and still is, very widespread.”

Recently, what seems like a whimsical question — Why can't monkeys talk? — has turned into a serious and heated debate among former collaborators. In December, Fitch and colleagues published a paper in Science Advances that announced its counterpunch in the title: “Monkey vocal tracts are speech-ready.” On Friday, a pair of reports published in the same journal advanced the match to rounds three and four, with Lieberman's reply and a response to his reply by Fitch and the other authors of the December study.

For Lieberman's earlier study of rhesus macaques, published in 1969 in the journal Science, “We took a rhesus monkey and then started to see what the anatomical limits were,” he said. The researchers made a plaster cast of a monkey throat from a macaque that died naturally. With a live but sedated monkey, the researchers manipulated the animal's tongue and documented the positions it could make. Using this information, they estimated the range of monkey speech sounds. It was much smaller than a human's, and indicated that a macaque could not produce vowels, such as the long E, common to most languages.

The researchers followed this work with X-ray videos of infant humans, whose tongues resemble those of monkeys at birth but shift toward the throat as they grow. Additionally, Lieberman posited that Neanderthal vocal tracts resembled those of infant humans, too. And though Neanderthals must have been capable of limited speech, Lieberman said, they would not have spoken with the clarity of an adult human.

Since the 1980s, Lieberman has focused on primate brains; as early as 1968, he said, his work demonstrated that primates other than humans didn’t have the brainpower for complex speech.

In December 2016, a team of cognitive biologists and anthropologists replicated Lieberman's macaque research using more advanced techniques. Among the study's authors was Fitch, who once was a graduate student of Lieberman's. The key difference in the more recent work was that Fitch and his colleagues took X-ray videos of live macaques as they made noise or chewed.

From the new research, “we had much more data” than the plaster or silicone casts of decades prior, said Bart de Boer, an author of the 2016 study and an expert on the evolution of speech at the Vrije Universiteit Brussel. “It turned out that that represented a much bigger range of possible sounds.”

Using 99 images of rhesus macaque vocal tracts from the X-rays, these researchers simulated the three-dimensional space in the monkey mouth and throat. By mapping the flow of air through this space, the scientists generated a hypothetical range of speech sounds that monkeys could produce. The “phonetic potential” of a macaque, they concluded, was eight times larger than estimated in 1969.

The scientists generated the phrase “Will you marry me?” as if spoken by a macaque. The words are odd and clipped, but the authors argued it sounds as comprehensible as English with a slight “foreign accent.” (Less charitably, it sounds like Gollum, from “The Lord of the Rings,” trying to speak through a chokehold.) Crucially, the models showed the macaques were capable of vowel sounds, as in “bat,” “bet,” “bit,” “but” and “bought.”

“It certainly showed that we had been underestimating the range of sounds that they can make,” de Boer said. “We should focus our research on the evolution of cognition if we really want to understand the origins of speech.”

The reaction to this research among experts studying the evolution of speech, “was actually very positive and supportive,” said de Boer. (With the exception of Lieberman, he added.) In the years leading up to this paper, he said, “the idea that cognition is more important than vocal anatomy was already gaining traction” among evolutionary specialists.

At the time of the 2016 paper's publication, John Esling, a linguist at Canada's University of Victoria who was not involved with this work, told Science Magazine that, “This certainly shows that the macaque vocal tract is capable of a lot more than has previously been assumed.”

Anna Barney, a biomedical acoustic engineer at the University of Southampton in England who also was not involved with the research, told the New York Times in December that the new research was convincing but raised questions, such as the lack of macaque consonants. “What they’ve shown is that monkeys are vowel-ready, not speech-ready.”

To Lieberman, the 2016 study was not a refutation but a confirmation. “There’s nothing really new in the Fitch paper except a series of misrepresentations with the intent to deceive,” he said. What he called “monkey-speech” still could not produce the long E vowel sound, as in “see” or “beet.” “They still have no ‘ee.’ They barely have an approximation of an ‘oo’ and an ‘aa.’ ”

The long E has special status among the vowels, he said — in fact, research indicates it is possible to estimate the size of a person based on the sound of her or his long E. It is the most easily identified vowel sound, he said. There's really only one way for our vocal tracts to make it, he said, but “for virtually any other vowel there are different ways to skin the cat.”

To produce a long E, humans pull their tongues upward and forward, creating a large back cavity above the larynx while constricting the forward area, creating a 10-to-one difference in volume from back to front. “You can’t do this with a nonhuman primate. The tongue is almost flat in a monkey,” he said. “You’d have to take a knife to cut the tongue in half. It’s simply impossible.”

Listening to someone speak a long E “is an optimal signal that allows listeners to estimate the length of speakers' vocal tracts — a necessary step in determining what a person intended to say,” he said.
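The acoustics behind that size estimate can be sketched with the textbook uniform-tube model (a crude approximation, not the model used in either study): a tube closed at one end resonates at frequencies inversely proportional to its length, so a longer vocal tract produces lower formants, which is what lets a listener gauge a speaker's size.

```python
def tube_formants(length_cm, n=3, speed_of_sound=35000.0):
    """Resonant frequencies (Hz) of a uniform tube closed at one end,
    a crude stand-in for a neutral vocal tract of the given length."""
    return [(2 * k - 1) * speed_of_sound / (4.0 * length_cm)
            for k in range(1, n + 1)]

# An adult human vocal tract is roughly 17 cm long
print([round(f) for f in tube_formants(17.0)])  # -> [515, 1544, 2574]
# Halving the tract length doubles every formant
print([round(f) for f in tube_formants(8.5)])   # -> [1029, 3088, 5147]
```

Real vowels come from constricting the tube at different points, which shifts these resonances away from the uniform-tube values; the long E is the extreme case Lieberman describes.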

De Boer acknowledged that the long E was special to humans as it “sits in a corner of the [human] acoustic space.” But, he countered, apes might have a different acoustic corner that helps estimate size. “The fact that macaques cannot produce 'ee' is therefore a bit of a red herring,” he said.

“The importance of the vowel remains debated,” Fitch said. “Lieberman argues it is important and we are unconvinced.” In fact, Fitch and de Boer described the long E as “mythical” in their latest report, a phrase that Lieberman found “offensive and demeaning” and likened to a “Trump tweet.”

Barney, in an email to The Washington Post, said, “My own view regarding [long E] is in agreement with theirs – it is not in a unique category in terms of determining the size of an individual’s vocal tract.”

In his new comment on Fitch and colleagues' paper, Lieberman cited Charles Darwin, who reflected on the perilous position of our trachea in 1859: Food has to pass over our windpipe. Other primates do not have this arrangement. This improves our range of sounds, Lieberman said, but the opening also increases the chances we will choke on food. In his view, the selective advantage of speech explains this choking hazard. (That humans are more likely to choke on food than other primates is an intuitive idea but remains unproven, Fitch said.)

“We are not saying that the human vocal tract is not fine-tuned for speech,” de Boer said. He said he believed it is, though noted that “there is still a lively debate about this.” But, he said, their point was that a monkeylike vocal tract does not necessarily preclude primitive speech. “Speech could therefore have started to evolve even in a situation where our ancestors still had a monkeylike vocal tract.”

Despite decades of observation, however, monkeys have not yet begun to speak.

*Read more:*
These monkeys are creating tools thought to be unique to humans — by accident
Orangutan granted rights of personhood in Argentina
Think your dog talks like people? Scientists say you might just be right.


----------






## ae1905

By Hannah Lang 11 July 2017

*Plants Can Turn Caterpillars Into Cannibals to Avoid Getting Eaten*

In order to protect themselves from hungry herbivores, plants can deploy a defence mechanism that makes them taste foul.









Some plants have been found to use nature’s dog-eat-dog world to their advantage, forcing herbivores to become cannibals when the plants feel threatened by a caterpillar’s endless appetite.

A new study published in the journal Nature Ecology & Evolution found that when some plants are under attack from hungry herbivores, they mount chemical defences that make them incredibly foul-tasting to caterpillars, which spurs the caterpillars to eat each other.

“Plants can defend themselves so much that they food-stress the herbivore, and then the herbivores determine that rather than have plants on their menu, they should have caterpillars at the top of their menu,” said John Orrock, the author of the study and a researcher in the Department of Zoology at the University of Wisconsin, Madison.

*PLANTING THE SEEDS*

Orrock and his research team sprayed tomato plants with methyl jasmonate—a substance plants produce in response to environmental stresses—to trigger the plants’ defense mechanisms. The treatment changed the plants’ chemistry, making them less appetising to the beet armyworm caterpillars that were placed on them.

This phenomenon has been documented in a variety of plants, and research has suggested that plants can sense when surrounding plants are under attack, which can spur the production of methyl jasmonate in entire communities of plants. 

“What I find most interesting is the general idea that plants use information from their environment and use that information to effectively allocate their resources either into defense or into something else,” said Orrock.

And methyl jasmonate can do more than just make a plant taste bad.

“These chemicals can attract natural enemies like predators and parasitoids that will eat herbivores,” said Orrock.

*CANNIBAL CATERPILLARS*

When caterpillars find that the plant they’re munching on is no longer tasty, they are faced with a choice that Orrock said becomes simple.

“You can either eat this plant or you can turn on your comrades,” he said. “The choice is clear.”

The research team looked at the growth rate of the caterpillars and found that the caterpillars that consumed a plant diet and the caterpillars that became cannibals grew at the same rate, meaning they were able to compensate for their low-quality plant diet. (Read more about how caterpillars use vibrations to communicate.)

“It becomes a cost benefit analysis for the caterpillar, the fact that the plant material becomes so low quality that in the interest of maintaining and even sustaining metabolism, the caterpillar needs to find the highest quality food resource it can around it,” said Brian Connolly, a post-doctoral researcher who also worked on the study.

The larger caterpillars tend to prey on the smaller caterpillars, following the philosophy “eat or be eaten,” Orrock said.

*MOVING FORWARD*

In this experiment, the caterpillars weren’t given the option to try another plant before resorting to cannibalism, but Orrock and Connolly are conducting research in larger settings where the caterpillars are given the choice.

However, in the new larger setting, the cannibalism trends appear to be similar.

“Even with the capacity to disperse a little bit further and especially escape your hungry buddies, they do end up consuming each other with sort of the same patterns,” said Connolly.

Why the caterpillars are so quick to eat their own when they could just as easily move to a different plant isn't clear, but Orrock and Connolly hope to find out.

"As you can imagine, this is something that we’ve sort of switched gears and has become a top priority so we’re in the process of actually evaluating that data right now," said Connolly.


----------



## ae1905


scientificamerican.com *What Does the Antarctic Ice Shelf Break Really Mean?*

Adrian Luckman, The Conversation US

[HR][/HR] _The following essay is reprinted with permission from The Conversation, an online publication covering the latest research._
One of the largest icebergs ever recorded has just broken away from the Larsen C Ice Shelf in Antarctica. Over the past few years I’ve led a team that has been studying this ice shelf and monitoring change. We spent many weeks camped on the ice investigating melt ponds and their impact—and struggling to avoid sunburn thanks to the thin ozone layer. Our main approach, however, is to use satellites to keep an eye on things.

We’ve been surprised by the level of interest in what may simply be a rare but natural occurrence. Because, despite the media and public fascination, the Larsen C rift and iceberg “calving” is not a warning of imminent sea level rise, and any link to climate change is far from straightforward. This event is, however, a spectacular episode in the recent history of Antarctica’s ice shelves, involving forces beyond the human scale, in a place where few of us have been, and one which will fundamentally change the geography of this region.

Ice shelves are found where glaciers meet the ocean and the climate is cold enough to sustain the ice as it goes afloat. Located mostly around Antarctica, these floating platforms of ice a few hundred meters thick form natural barriers which slow the flow of glaciers into the ocean and thereby regulate sea level rise. In a warming world, ice shelves are of particular scientific interest because they are susceptible both to atmospheric warming from above and ocean warming from below.

Back in the 1890s, a Norwegian explorer named Carl Anton Larsen sailed south down the Antarctic Peninsula, a 1,000km long branch of the continent that points towards South America. Along the east coast he discovered the huge ice shelf which took his name.

For the following century, the shelf, or what we now know to be a set of distinct shelves—Larsen A, B, C and D—remained fairly stable. However, the sudden disintegrations of Larsen A and B in 1995 and 2002 respectively, and the ongoing speed-up of the glaciers that fed them, focused scientific interest on their much larger neighbour, Larsen C, the fourth biggest ice shelf in Antarctica.


Some great aerial footage from @BAS_News of the rift on Larsen C! pic.twitter.com/aXyCx9QTzX
— Project MIDAS (@MIDASOnIce) February 21, 2017

This is why colleagues and I set out in 2014 to study the role of surface melt in the stability of this ice shelf. Not long into the project, the discovery by our colleague Daniela Jansen of a rift growing rapidly through Larsen C immediately gave us something equally significant to investigate.

*Nature at work*

The development of rifts and the calving of icebergs is part of the natural cycle of an ice shelf. What makes this iceberg unusual is its size—at around 5,800 km² it’s the size of a small US state. There is also the concern that what remains of Larsen C will be susceptible to the same fate as Larsen B, and collapse almost entirely.

Our work has highlighted significant similarities between the previous behaviour of Larsen B and current developments at Larsen C, and we have shown that stability may be compromised. Others, however, are confident that Larsen C will remain stable.

What is not disputed by scientists is that it will take many years to know what will happen to the remainder of Larsen C as it begins to adapt to its new shape, and as the iceberg gradually drifts away and breaks up. There will certainly be no imminent collapse, and unquestionably no direct effect on sea level because the iceberg is already afloat and displacing its own weight in seawater.

This means that, despite much speculation, we would have to look years into the future for ice from Larsen C to contribute significantly to sea level rise. In 1995 Larsen B underwent a similar calving event. However, it took a further seven years of gradual erosion of the ice-front before the ice shelf became unstable enough to collapse, and glaciers held back by it were able to speed up, and even then the collapse process may have depended on the presence of surface melt ponds.

Updated #Sentinel1 InSAR sequence shows final branching at the rift tip as it reaches within 4.5 km (2.8 miles) of breaking through to calve pic.twitter.com/6F1Bs8Zmkv
— Adrian Luckman (@adrian_luckman) July 6, 2017

Even if the remaining part of Larsen C were to eventually collapse, many years into the future, the potential sea level rise is quite modest. Taking into account only the catchments of glaciers flowing into Larsen C, the total, even after decades, will probably be less than a centimetre.

*Is this a climate change signal?*

This event has also been widely but over-simplistically linked to climate change. This is not surprising because notable changes in the Earth’s glaciers and ice sheets are normally associated with rising environmental temperatures. The collapses of Larsen A and B have previously been linked to regional warming, and the iceberg calving will leave Larsen C at its most retreated position in records going back over a hundred years.

However, in satellite images from the 1980s, the rift was already clearly a long-established feature, and there is no direct evidence to link its recent growth to either atmospheric warming, which is not felt deep enough within the ice shelf, or ocean warming, which is an unlikely source of change given that most of Larsen C has recently been thickening. It is probably too early to blame this event directly on human-generated climate change.

_This article was originally published on The Conversation. Read the original article._



Adrian Luckman
Adrian Luckman is a professor of glaciology and remote sensing at Swansea University.


----------



## EndsOfTheEarth

Exoskeletons help the paralysed walk again...

https://www.technologyreview.com/s/546276/this-40000-robotic-exoskeleton-lets-the-paralyzed-walk/

My only question is, why has it got shoes and not rollerblades?


----------



## ae1905

futurism.com

*METI to Send Interstellar Messages in 2018*

Robert Sanders
*“Hello, Other World!”*

The need to reach out and make contact with another, complementary mind—whether in our personal lives or collectively, as a species—is a powerful human urge. And now the dream—no matter how remote—of opening up a line of communications with a nonhuman intelligence is entering a new, more systematic phase.

A San Francisco-based organization is now preparing to send continuous messages to those nearby planets thought most likely to harbor alien life. Messaging Extra Terrestrial Intelligence (METI) has already set its sights on dispatching coded signals to a rocky planet that orbits Proxima Centauri, the nearest star to our Sun, by 2018.

Douglas Vakoch, the former director of Interstellar Message Composition at the Search for Extraterrestrial Intelligence (SETI) Institute and president of METI, believes that passively scanning the cosmos for messages betraying the existence of intelligent life elsewhere is simply not enough. In an interview with Forbes, he explains: “It’s too late to conceal ourselves in the universe, so we should decide how we want to represent ourselves. Extraterrestrials may be waiting for a clear indication from us that we’re ready to start talking.”

According to Phys.org, the program involves beaming deliberate, repeated signals into space over long periods of time toward suspect stars and planets—and by “suspect,” we mean possessing a configuration approximately similar to our own Solar System and planet. As far as content is concerned, scientists plan to send information that transcends language, such as mathematical proofs or scientific concepts. The initial message is planned to be just a simple “hello,” but the rather significant problem of how to construct it still remains.







The Arecibo message of 1974.

*A Shot in the Dark*

METI’s efforts are merely the latest iteration of a long tradition of attempts to communicate more actively with potential alien neighbors. The most famous instance is the “Arecibo Message,” which was dispatched in the general direction of the globular cluster M13 in 1974. The binary signal, encoded in radio waves, contained pictorial representations of humanity, formulae for the elements and compounds that make up DNA, and representations of the Solar System and the Arecibo transmitter—a remarkably information-dense message crafted by Frank Drake and Carl Sagan, two leading astronomers of the time.
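The Arecibo Message relied on a neat encoding trick: its 1,679 bits factor only as 23 × 73, so a receiver can deduce the picture's dimensions from the length alone. Here is that idea in miniature; the 5 × 3 bitmap is invented purely for illustration.

```python
# Toy version of the Arecibo trick: transmit a flat bit string whose
# length is the product of two primes, so it folds into only one
# non-trivial rectangle. The tiny 5x3 "image" here is made up.

ROWS, COLS = 5, 3  # both prime, like Arecibo's 73 x 23
image = [
    "010",
    "111",
    "010",
    "010",
    "010",
]  # a crude stick figure / arrow

bitstream = "".join(image)
assert len(bitstream) == ROWS * COLS  # 15 = 5 x 3, a semiprime

def decode(bits: str, rows: int, cols: int) -> list[str]:
    """Re-fold the flat transmission into a rows x cols picture."""
    return [bits[r * cols:(r + 1) * cols] for r in range(rows)]

for line in decode(bitstream, ROWS, COLS):
    print(line.replace("0", ".").replace("1", "#"))
```

The receiver only needs to factor the length; the semiprime guarantees the grid shape is unambiguous (up to transposition).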

Other attempts at interstellar communication took a more pedestrian approach—something like a cosmic “message in a bottle.” The first of these were the Pioneer Plaques—gold-anodized aluminium plates engraved with representations of humans, hydrogen, and a diagram of the Solar System—which were bolted to the frames of the Pioneer 10 and Pioneer 11 probes in the early 70s. Since it was known beforehand that the probes’ trajectories would eventually carry them out of the Solar System altogether, it was sensibly concluded that attaching a message for any extraterrestrials who might happen upon them in the distant future—the very definition of the proverbial long shot—was worth the limited investment in time and money.

Similar efforts include the famous Voyager Golden Records, which are simply low-tech phonograph records (that’s 1890s technology, folks) with a playback stylus and instructions for how to play the thing. More recent attempts include The Last Pictures—a micro-etched disc carried by a geostationary satellite—and a myriad of radio messages.

For better or worse, we’ve had no replies. But that won’t stop the dreamers from trying.







The Voyager “Golden Record.” Credit: NASA


----------



## ae1905

more on the meti story:


phys.org

*SETI scientists say it's time to send messages to ET*

February 14, 2015, by Nancy Owano
This is the "South Pillar" region of the star-forming region called the Carina Nebula. Like cracking open a watermelon and finding its seeds, the infrared telescope "busted open" this murky cloud to reveal star embryos tucked inside finger-like pillars of thick dust. Credit: NASA 

Scientists want to contact extraterrestrial civilizations. Some applaud the effort. Others say this is not a good plan at all. The idea is for messages encoded in radio signals to be sent repeatedly for hundreds of years to planets in habitable zones around stars, said a report in _The Guardian_. Repeated signals would be beamed at nearby planets that were chosen for their odds of harboring life. The scientists are from the Search for Extraterrestrial Intelligence (SETI) Institute in California.

The BBC said that SETI's researchers have been listening for signals from outer space for more than 30 years using radio telescope facilities. So far there has been no sign. Writing in _ScienceInsider_, Eric Hand said, "Since the SETI movement began in the 1960s, it has mostly involved using radio telescopes to listen to bands in the electromagnetic spectrum for something out of the ordinary."

Seth Shostak, director of the Center for SETI Research at the SETI Institute, believes that it is time to step up the search from listening to broadcasting. "Some of us at the institute are interested in 'active SETI', not just listening but broadcasting something to some nearby stars because maybe there is some chance that if you wake somebody up you'll get a response," he told BBC News. He proposed beaming the entire contents of the Internet, said _The Guardian_. Shostak communicated his views at the American Association for the Advancement of Science meeting in San Jose on Thursday. Hand said scientists in both camps, for and against beaming, participated in the Thursday debate.

Among those against the idea in the past has been scientist Stephen Hawking. In 2010, _The Guardian_ reported that Hawking believed we would be well-advised "to keep the volume down on our intergalactic chatter," according to Leo Hickman. Should earthlings draw attention to ourselves? Should we be yelling into space? Are we looking at a risk of aggression and even annihilation? _The Guardian_ said Hawking had warned that "an encounter with more advanced ETs could go badly for humans." Ian Sample, science editor of _The Guardian_, said others agree: "Simon Conway Morris, an evolutionary paleobiologist at Cambridge, has urged governments to prepare for the worst because aliens might be as violent and greedy as humans – or worse." Shostak and others, meanwhile, put forward their views in favor of beaming messages. According to _The Guardian_, Douglas Vakoch, SETI Institute director of interstellar message composition, said, "We have already yelled 'Yoo Hoo!' We now want to follow up with something with a little more substance."

The BBC said the current plan was for leading astronomers, anthropologists and social scientists to gather at SETI after the AAAS meeting for a symposium. On its website, the Institute stated that it recognizes the need for further discussion, saying that "the topic of active transmissions towards potential extraterrestrial technological civilizations is not just a scientific matter, but also one with policy, diplomatic, regulatory, and cultural ramifications. It is a topic about which people have strong views, and a topic that needs to be discussed broadly."

David Brin, a scientist and science fiction writer, spoke at the AAAS meeting. "Historians will tell you that first contact between industrial civilizations and indigenous people does not go well," Brin told Pallab Ghosh, science correspondent, BBC News.


----------



## ae1905

^

if scientists send a video of earth's notional leader, donald trump, I'm confident aliens would lose any interest in visiting us


----------



## ae1905

NASA posted Cassini's final photos before killing the space probe - Business Insider


----------



## ae1905

*IBM's simulated molecule could lead to drug and energy advances*
A quantum computer has simulated the largest molecule to date.

Rachel England, @Rachel_england

IBM's quantum computer has made a small advance that could ultimately lead to a major chemistry breakthrough. A team of IBM researchers has successfully used IBM Q to accurately simulate the molecular structure of beryllium hydride (BeH2), the largest molecule simulated by a quantum computer to date. This is pretty important, because simulating any molecule on a quantum level is no easy task, never mind a big one.

The point of simulating molecules is to determine how they will interact with other compounds, so researchers can create safe and effective chemical models for things like medicines and batteries. To do this, scientists need to find the molecule's most stable configuration, known as its ground state. In theory, this should be straightforward enough, but to truly understand the molecule's behavior, you have to figure out how each electron in each atom will interact with all the other atoms' nuclei, including the unusual quantum effects that take place on such small scales.
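"Ground state" just means the lowest eigenvalue of the molecule's Hamiltonian. As a minimal sketch of that idea, here is a two-level toy system; the energies and coupling are invented for illustration and have nothing to do with BeH2 or IBM's actual quantum algorithm.

```python
import math

# Toy two-level Hamiltonian (illustrative only, not BeH2):
#   H = [[e0, t],
#        [t,  e1]]
# with bare energies e0, e1 and coupling t. The ground-state energy is
# the lower of the two eigenvalues of this symmetric 2x2 matrix.

def ground_state_energy(e0: float, e1: float, t: float) -> float:
    """Lowest eigenvalue of the 2x2 symmetric Hamiltonian above."""
    mean = (e0 + e1) / 2.0
    gap = math.hypot((e0 - e1) / 2.0, t)  # half the eigenvalue splitting
    return mean - gap

# Coupling always pushes the ground state below the lower bare level:
print(ground_state_energy(-1.0, 1.0, 0.5))
```

A real molecule's Hamiltonian grows exponentially with the number of electrons, which is exactly why classical computers get overwhelmed and a quantum processor becomes attractive for the same eigenvalue problem.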

So there's a lot going on, and today's computers can quickly become overwhelmed by the magnitude of options and outcomes, leaving chemical modellers to make approximations about how a molecule might behave and then test their theories in the real world -- which can be time-consuming. So the IBM team's demonstration that its seven-qubit chip is up to the job of calculating the ground state of BeH2 has significant implications for the future of chemistry as we know it. As the team outlines on the IBM blog, simulating chemical reactions accurately is an important step in discovering new drugs, fertilizers and even new sustainable energy sources. It's big news on a small scale.


----------






## Sherwood Forest

Ecstasy Was Just Labelled a 'Breakthrough Therapy' For PTSD by The FDA


----------



## ae1905

blogs.discovermagazine.com

*Intravaginal Tunes and Didgeridoos: Your 2017 Ig Nobel Winners*

This man sleeps well. _(Credit: Shutterstock)_

Not all science needs to be so serious. Since 1991, the Ig Nobel Prize ceremony has proven that the best scientific research can sometimes be a mix of impactful and irreverent.

Let’s check out this year’s winners, broken down by scientific category.

*Physics: “On the Rheology of Cats”*
Cat owners are familiar with the peculiar quality of felines to fill whatever vessel they occupy, much like a liquid. So it’s only appropriate that rheology, the branch of physics that studies the flow of matter, would examine cats’ liquid properties. The study is easily worth the read for the photos alone.

If it’s a vessel, cats will find a way to fit in it. A study that examined feline fluid dynamics was among this year’s winners. _(Credit: Shutterstock)_
*Peace: “Didgeridoo Playing as Alternative Treatment for Obstructive Sleep Apnoea Syndrome: Randomised Controlled Trial”*
Peace need not be on a macro scale — anyone in the proximity of a heavy snorer will attest that their peace is being pretty heavily disrupted. Fortunately, a didgeridoo instructor noticed his students experienced less daytime sleepiness and snoring. It turns out that training the muscles of the upper airways helps diminish sleep apnea symptoms. Peace at last.

*Economics: “Never Smile at a Crocodile: Betting on Electronic Gaming Machines is Intensified by Reptile-Induced Arousal”*
In one of the stranger Ig Nobel winners, tourists at a crocodile farm were assigned to play a digital gambling machine. Some participants only gambled; others were told to hold a small crocodile, then gamble. The researchers’ hypothesis: mental and emotional arousal caused by the croc may affect gambling patterns. As it turns out, those who were frightened of the crocodile placed lower-than-average bets, and those who were less emotionally troubled by the reptile placed higher-than-average bets. So maybe avoid hanging out with your snake before heading to the slots.

*Anatomy: “Why Do Old Men Have Big Ears?”*
You may have heard the adage that the only two parts of the body that never stop growing are your nose and ears. But is it true? This study of over 200 male participants established an average growth rate for men’s ears of about 0.22 millimeters a year. All the better to hear you with, my dear.
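Projected over a lifetime, that rate adds up to something noticeable. A quick sketch, using the study's reported rate with hypothetical ages:

```python
# Project the study's reported average ear-growth rate forward.
GROWTH_MM_PER_YEAR = 0.22  # figure reported in the study

def extra_ear_length_mm(years: float) -> float:
    """Extra ear length accumulated over a span of years, in millimetres."""
    return GROWTH_MM_PER_YEAR * years

# Hypothetically, from age 30 to age 80 (50 years):
print(extra_ear_length_mm(50))
```

Half a century at 0.22 mm a year is roughly a centimetre of extra ear, which is why the effect is visible in older men.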

*Fluid Dynamics: “A Study on the Coffee Spilling Phenomena in the Low Impulse Regime”*
Any barista will tell you one of the greatest pleasures of the job is watching people struggle to not spill their coffee. No matter the tactic — slow-walking, steady hands, mug-staring, etc. — nothing seems to help avoid the inevitable spill. This study establishes a new and effective anti-spill method: walking backwards.

*Nutrition: “What is for Dinner? First Report of Human Blood in the Diet of the Hairy-Legged Vampire Bat Diphylla ecaudata”*
Remember that turning point when you learned that vampire bats don’t actually consume human blood? Remember that rush of relief that your precious plasma was safe and sound? Well, it turns out it’s not. This study establishes the first case of the hairy-legged vampire bat consuming human blood. Break out the garlic.
This man is no cheese-hater. _(Credit: Shutterstock)_
*Medicine: “The Neural Bases of Disgust for Cheese: An fMRI Study”*
Hard to imagine, I know, but some people really hate cheese. Such an inexplicable phenomenon of course warrants deeper investigation, so this study placed 15 cheese lovers and 15 cheese haters one by one into an fMRI machine. The participants were then exposed to six cheesy smells and six non-cheesy smells. The results of their brain activity seem to indicate the reward and disgust centers of the brain may be linked. Ground _brie_-king!

*Cognition: “Is That Me or My Twin? Lack of Self-Face Recognition Advantage in Identical Twins”*
This study found that twins have a more difficult time identifying their own faces in photos than non-twins. The results bring up interesting questions of what role facial recognition and perception play in personal identity. Talk about an identity crisis.

*Obstetrics: “Fetal Facial Expression in Response to Intravaginal Music Emission” and “Fetal Acoustic Stimulation Device”*
Forget headphones on the belly — you’re better off putting a speaker inside your vagina. This study shows that fetuses exposed to music played inside the mother’s vagina responded with stronger facial expressions. One of the scientists even developed a special speaker you can pop in for the best possible fetal jam session. Rock on, baby.


----------



## ae1905

*When Does Your Baby Become Conscious?*

By Paul GabrielsenApr. 18, 2013 , 2:00 PM

For everyone who's looked into an infant's sparkling eyes and wondered what goes on in its little fuzzy head, there's now an answer. New research shows that babies display glimmers of consciousness and memory as early as 5 months old.

For decades, neuroscientists have been searching for an unmistakable signal of consciousness in electrical brain activity. Such a sign could determine whether minimally conscious or anesthetized adults are aware—and when consciousness begins in babies.

Studies on adults show a particular pattern of brain activity: When your senses detect something, such as a moving object, the vision center of your brain activates, even if the object goes by too fast for you to notice. But if the object remains in your visual field for long enough, the signal travels from the back of the brain to the prefrontal cortex, which holds the image in your mind long enough for you to notice. Scientists see a spike in brain activity when the senses pick something up, and another signal, the "late slow wave," when the prefrontal cortex gets the message. The whole process takes less than one-third of a second.

Researchers in France wondered if such a two-step pattern might be present in infants. The team monitored infants' brain activity through caps fitted with electrodes. More than 240 babies participated, but two-thirds were too squirmy for the movement-sensitive caps. The remaining 80 (ages 5 months, 12 months, or 15 months) were shown a picture of a face on a screen for a fraction of a second.

Cognitive neuroscientist Sid Kouider of CNRS, the French national research agency, in Paris watched for swings in electrical activity, called event-related potentials (ERPs), in the babies' brains. In babies who were at least 1 year old, Kouider saw an ERP pattern similar to an adult's, but it was about three times slower. The team was surprised to see that the 5-month-olds also showed a late slow wave, although it was weaker and more drawn out than in the older babies. Kouider speculates that the late slow wave may be present in babies as young as 2 months.

This late slow wave may indicate conscious thought, Kouider and colleagues report online today in _Science_. The wave, feedback from the prefrontal cortex, suggests that the image is stored briefly in the baby's temporary "working memory." And consciousness, Kouider says, is composed of working memory.

The team displayed remarkable patience to gather data from infants, says cognitive neuroscientist Lawrence Ward of the University of British Columbia, Vancouver, in Canada, who was not involved in the study. 

However, the work, although well executed, is not the last word, he says. "I expect we'll find several different neural activity patterns to be correlated with consciousness."

Comparing infant brain waves to adult patterns is tricky, says Charles Nelson, a neuropsychologist at Harvard Medical School in Boston. "ERP components change dramatically over the first few years of life," he writes in an e-mail. "I would be reluctant to attribute the same mental operation (i.e., consciousness) in infants as in adults simply because of similar patterns of brain activity."

"He's right, the ERP components are not exactly the same as in adults," Kouider responds, but the ERP signature he saw had the same characteristics.

Kouider next hopes to explore how these signals of consciousness connect to learning, especially language development. "We make the assumption that babies are learning very quickly and that they're fully unconscious of what they learn," Kouider says. "Maybe that's not true."


----------



## ae1905

scientificamerican.com

*Researchers Unite in Quest for "Standard Model" of the Brain*

Alison Abbott, Nature magazine

Leading neuroscientists are joining forces to study the brain—in much the same way that physicists team up in mega-projects to hunt for new particles.

The International Brain Lab (IBL), launched on September 19, combines 21 of the foremost neuroscience laboratories in the United States and Europe into a giant collaboration that will develop theories of how the brain works by focusing on a single behaviour shared by all animals: foraging. The Wellcome Trust in London and the Simons Foundation in New York City have together committed more than US$13 million over five years to kick-start the IBL.

The pilot effort is an attempt to shake up cellular neuroscience, conventionally done by individual labs studying the role of a limited number of brain circuits during simple behaviours. The ‘virtual’ IBL lab will instead ask how a mouse brain, in its entirety, generates complex behaviours in constantly changing environments that mirror natural conditions. 

The project will use chips that can record the electrical signals of thousands of neurons at once. It will also use other emerging technologies, such as optogenetics toolkits that control neurons with light. “It’s a new approach that will likely yield important new insights into brain and behaviour,” says Tobias Bonhoeffer, a director of the Max Planck Institute for Neurobiology in Martinsried, Germany, who is also a Wellcome Trust governing-board member. 

Large-scale neuroscience projects are hardly rare. In 2013, the European Commission announced the 10-year Human Brain Project, which will cost more than €1 billion ($1.1 billion); and in 2014, US president Barack Obama launched the US Brain Initiative to develop neuro-technologies, with $110 million of funding that year. The Allen Institute for Brain Science, in Seattle, Washington, has been creating comprehensive maps of brain anatomy and neural circuitry since 2003. Japan, China, Canada and other countries also have, or are planning, their own big neuroscience initiatives.

But none operates quite like the IBL, which will be governed in a similar way to large-scale physics projects such as ATLAS and CMS, at Europe’s particle-physics lab CERN, which reported evidence for the Higgs boson in 2012. The two collaborations, at CERN’s Large Hadron Collider near Geneva, Switzerland, brought together experimentalists and theoreticians from hundreds of labs worldwide to test the predictions of particle physics’ standard model. 

Like the massive CERN teams, the IBL has created a flat hierarchy and a collaborative decision-making process with near-daily web meetings. Instead of acting only when group consensus is reached, teams will make decisions by simple consent. “No one will be able to stop a proposed experiment being carried out without a very convincing proposal of why it would be a disaster,” says Alexandre Pouget, an IBL member and a theoretician at the University of Geneva in Switzerland.

So far, says Andreas Herz, a theoretical neuroscientist at the Ludwig Maximilian University of Munich, Germany, "neuroscience has been stuck in an exploratory phase". The IBL will aim to generate and test unifying theories about how the brain encodes and computes information – seeking to come up with the equivalent of physicists’ standard model.

But the IBL is hardly unique among big neuroscience projects in melding theory and practice, points out neuroanatomist Katrin Amunts at the Jülich Research Centre in Germany. Amunts also chairs the scientific board of Europe’s Human Brain Project, an initiative that is taking a more conventional approach to collaboration in its own attempts to understand how the brain works. “The future will show which is the best,” she says.

The IBL’s principal investigators, who include data-analysis experts as well as experimental and theoretical neuroscientists, will dedicate around 20% of their time to the effort. During its first two years, the IBL will build informatics tools for automatic data-sharing and establish a reliable experimental protocol for a basic foraging task in mice. Members will be required to register their experiments before they start, and results will be instantly visible to the whole collaboration.

“It is a big challenge—and it’s not the way the field works at the moment,” says Anne Churchland, an IBL member at Cold Spring Harbor Laboratory, New York.

In experimental neuroscience, the slightest parameter change can alter the outcomes of the experiment. The IBL’s standard protocol attempts to address all possible sources of variability, from the mice’s diets to the timing and quantity of light they are exposed to each day and the type of bedding they sleep on. Every experiment will be replicated in at least one separate lab, using identical protocols, before its results and data are made public. 

“This sort of approach will help solve the reproducibility crisis,” says Christof Koch, president of the Allen Institute for Brain Science.

Expanding the IBL beyond its pilot phase will require much more than $13 million, Pouget acknowledges. After the foraging protocol is established, the project’s second phase will test specific theories relating to how the brain integrates diverse information to make moment-by-moment decisions. He also hopes to enrol many more labs and broaden the suite of behaviours studied.

For Herz, a theoretician who is part of an influential computational-neuroscience network, it’s about time neuroscience adopted such rigour. “A hundred years from now,” he says, “people will look back and wonder why it hadn’t, until now, been possible to do a more physics-based approach of designing experiments to consolidate or disprove theories.”

_This article is reproduced with permission and was first published on September 19, 2017._


----------



## ae1905

scienmag.com

*Scientists create world's first 'molecular robot' capable of building molecules*

By Scienmag







Image credit: Stuart Jantzen, Biocinematics 

Scientists at The University of Manchester have created the world’s first ‘molecular robot’ that is capable of performing basic tasks including building other molecules. The tiny robots, which are a millionth of a millimetre in size, can be programmed to move and build molecular cargo, using a tiny robotic arm.

Each individual robot is capable of manipulating a single molecule and is made up of just 150 carbon, hydrogen, oxygen and nitrogen atoms. To put that size into context, a billion billion of these robots piled on top of each other would still only be the same size as a single grain of salt. The robots operate by carrying out chemical reactions in special solutions which can then be controlled and programmed by scientists to perform the basic tasks.
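The grain-of-salt comparison checks out arithmetically if each robot is treated as roughly a 1-nanometre cube and a billion billion of them are packed into a solid block. The robot size and count come from the article; the cubic-packing assumption is ours, for illustration.

```python
# Sanity check on the article's scale claim:
# ~1 nm robots, a "billion billion" (1e18) of them, packed into a cube.

robot_size_nm = 1.0   # article: a millionth of a millimetre = 1 nm
n_robots = 1e18       # "a billion billion"

side_in_robots = n_robots ** (1 / 3)            # robots along one edge
side_mm = side_in_robots * robot_size_nm / 1e6  # 1 mm = 1e6 nm

print(round(side_mm, 3))
```

The cube comes out about a millimetre on a side, which is indeed the scale of a single grain of salt.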

In the future such robots could be used for medical purposes, advanced manufacturing processes and even building molecular factories and assembly lines. The research will be published in Nature on Thursday 21st September.

Professor David Leigh, who led the research at the University’s School of Chemistry, explains: ‘All matter is made up of atoms and these are the basic building blocks that form molecules. Our robot is literally a molecular robot constructed of atoms just like you can build a very simple robot out of Lego bricks. The robot then responds to a series of simple commands that are programmed with chemical inputs by a scientist.

‘It is similar to the way robots are used on a car assembly line. Those robots pick up a panel and position it so that it can be riveted in the correct way to build the bodywork of a car. So, just like the robot in the factory, our molecular version can be programmed to position and rivet components in different ways to build different products, just on a much smaller scale at a molecular level.’

The benefit of having machinery that is so small is it massively reduces demand for materials, can accelerate and improve drug discovery, dramatically reduce power requirements and rapidly increase the miniaturisation of other products. Therefore, the potential applications for molecular robots are extremely varied and exciting.

Prof Leigh says: ‘Molecular robotics represents the ultimate in the miniaturisation of machinery. Our aim is to design and make the smallest machines possible. This is just the start but we anticipate that within 10 to 20 years molecular robots will begin to be used to build molecules and materials on assembly lines in molecular factories.’

Whilst building and operating such tiny machines is extremely complex, the techniques used by the team are based on simple chemical processes.

Prof Leigh added: ‘The robots are assembled and operated using chemistry. This is the science of how atoms and molecules react with each other and how larger molecules are constructed from smaller ones.

‘It is the same sort of process scientists use to make medicines and plastics from simple chemical building blocks. Then, once the nano-robots have been constructed, they are operated by scientists by adding chemical inputs which tell the robots what to do and when, just like a computer program.’


----------



## ae1905

blogs.discovermagazine.com

*Can Neuroscience Inform Everyday Life? The "Translation Problem"*

A new paper asks why neuroscience hasn’t had more _“impact on our daily lives.”_

The article, *Neuroscience and everyday life: facing the translation problem, *comes from Dutch researchers Jolien C. Francken and Marc Slors. It’s a thought-provoking piece, but it left me feeling that the authors are expecting too much from neuroscience. I don’t think insights from neuroscience are likely to change our lives any time soon.

Francken and Slors describe a disconnect between neuroscience research and everyday life, which they dub the ‘translation problem’. The root of the problem, they say, is that while neuroscience uses words drawn from everyday experience – ‘lying’, ‘love’, ‘memory’, and so on – neuroscientists rarely use these terms in the usual sense. Instead, neuroscientists will study particular _aspects_ of the phenomena in question, using particular (often highly artificial) experimental tasks.

As a result, say Francken and Slors, the neuroscience of (say) ‘love’ does not directly relate to ‘love’ as the average person would use the word:

We should be cautious in interpreting the outcomes of neuroscience experiments simply as, say, results about ‘lying’, ‘free will’, ‘love’, or any other folk-psychological category. How then can neuroscientific findings be translated in terms that speak to our practical concerns in a non-misleading, non-naive way?
They go on to discuss the nature of the translation problem in much more detail, as well as potential solutions.

In my view, Francken and Slors are quite right that neuroscience often studies particular aspects of phenomena that are quite far removed from everyday reality. A study of emotion, for instance, might provoke positive emotions using pictures of chocolate while using bloody gore images for the negative stimuli. Clearly, emotion is rather more complex than that.

Neuroscientists have their reasons for using these kinds of simplistic experimental set-ups, of course. They provide reliable, controllable emotional responses, something less easy to achieve in the real world. There is also value in using well-studied tasks, to permit comparisons with previous work, even if the tasks might not be ideal.

I do think that neuroscience should endeavor to better approximate real life – to become more naturalistic, as the phrase goes. I also think that neuroscientists often need to clarify their concepts. Francken and Slors make the same recommendations, but for different reasons than I do. I think the main benefit of this would be that neuroscience would be better able to understand the brain. Francken and Slors however suggest that neuroscience could be (and ought to be) able to change the way we think about everyday issues:

If… our everyday ‘folk-psychology’ could be operationalised unproblematically and unambiguously in neuroscientific experiments, the outcomes of these experiments would ideally directly inform [everyday] practices.
I disagree. I don’t see any reason why neuroscience would change our everyday lives. To put it simply, we already know _what_ our brains do – we are familiar with the behaviours and experiences that make up human life (i.e. with psychology, broadly defined). Neuroscience is the search to understand _how_ the brain does what it does, but this knowledge won’t change the facts of psychology.

To give an example, we now know a great deal about the structure and function of the retina (which is part of the brain). Retinal biology is useful in diagnosing and treating retinal diseases. But it hasn’t changed how we use our retinas in everyday life, or how we think about vision. We already knew how to use our retinas; science just explained why the retina works the way it works.

So I don’t think that knowing (say) the neuroscience of decision-making would help us to make better decisions. In general, I don’t see Francken and Slors’ ‘translation problem’ _as_ a problem. We shouldn’t look to neuroscience for life tips.


----------



## ae1905

bbc.com

*New antibody attacks 99% of HIV strains*

By James Gallagher Health and science reporter, BBC News website
Scientists have engineered an antibody that attacks 99% of HIV strains and can prevent infection in primates.

It is built to attack three critical parts of the virus - making it harder for HIV to resist its effects.

The work is a collaboration between the US National Institutes of Health and the pharmaceutical company Sanofi.

The International Aids Society said it was an "exciting breakthrough". Human trials will start in 2018 to see if it can prevent or treat infection. 

Our bodies struggle to fight HIV because of the virus' incredible ability to mutate and change its appearance.

These varieties of HIV - or strains - in a single patient are comparable to those of influenza during a worldwide flu season. 

So the immune system finds itself in a fight against an insurmountable number of strains of HIV. 

*Super-antibodies*

But after years of infection, a small number of patients develop powerful weapons called "broadly neutralising antibodies" that attack something fundamental to HIV and can kill large swathes of HIV strains. 

Researchers have been trying to use broadly neutralising antibodies as a way to treat HIV, or prevent infection in the first place. 

The study, published in the journal Science, combines three such antibodies into an even more powerful "tri-specific antibody".

Dr Gary Nabel, the chief scientific officer at Sanofi and one of the report authors, told the BBC News website: "They are more potent and have greater breadth than any single naturally occurring antibody that's been discovered."

The best naturally occurring antibodies will target 90% of HIV strains. 

"We're getting 99% coverage, and getting coverage at very low concentrations of the antibody," said Dr Nabel.

Experiments on 24 monkeys showed none of those given the tri-specific antibody developed an infection when they were later injected with the virus. 

Dr Nabel said: "It was quite an impressive degree of protection." 

The work included scientists at Harvard Medical School, The Scripps Research Institute, and the Massachusetts Institute of Technology. 

*'Exciting'*

Clinical trials to test the antibody in people will start next year. 

Prof Linda-Gail Bekker, the president of the International Aids Society, told the BBC: "This paper reports an exciting breakthrough.

"These super-engineered antibodies seem to go beyond the natural and could have more applications than we have imagined to date. 

"It's early days yet, and as a scientist I look forward to seeing the first trials get off the ground in 2018. 

"As a doctor in Africa, I feel the urgency to confirm these findings in humans as soon as possible."

Dr Anthony Fauci, the director of the US National Institute of Allergy and Infectious Diseases, said it was an intriguing approach.

He added: "Combinations of antibodies that each bind to a distinct site on HIV may best overcome the defences of the virus in the effort to achieve effective antibody-based treatment and prevention."

Follow James on Twitter.


----------



## ae1905

blogs.discovermagazine.com

*We Can Hold on to Glass, Thanks to Sweat*
Thanks to sweat, you can drink in peace. _(Credit: Yellow Cat/Shutterstock)_
Our fingers take a little while to get used to glass.

New research shows that when we touch a glass object, whether it be a cup or a smartphone screen, it can take up to 20 seconds before our fingers form a good grip. This is in contrast to things like rubber, which may be just as smooth, but are much softer and easier to hold.

Glass is slippery, obviously, but we can still get a handle on it thanks to small attractive forces between the molecules of the object and our fingers. More contact surface area equates to more friction, and researchers from France and the United Kingdom found that our fingers can adapt to slippery surfaces by changing the amount of skin that touches them — which is where the sweat comes in.

*Sweat It Out*

The adaptation happens on a very small scale and involves the hard outer layer of our skin. The skin surface is fairly tough and contains ridges and grooves — your fingerprints. Those grooves are good for grabbing onto rough surfaces, or soft things like rubber, but slip right off when faced with glass. So, when our bodies sense something smooth and hard, they secrete a little bit of sweat, just enough to soften up the keratin in our skin. This makes those ridges pliable and flattens them out, putting more skin in contact with the glass and creating more friction.

In experiments published Monday in the _Proceedings of the National Academy of Sciences_, the researchers used a glass prism and a camera to watch this process unfold. They found it didn’t happen immediately; in fact, it could take anywhere from a few seconds up to twenty. The rate varied based on a number of factors, including the object’s weight, how fast it was applied and the person being tested. When they tried the same thing with rubber, the process happened so quickly they couldn’t even catch it on camera, likely because the soft surface let the ridges on our skin sink right in, with no need for any softening.
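The time course described here can be sketched as a toy first-order relaxation model: treat the fraction of fingertip ridge area in contact with the glass as rising exponentially toward a steady state while sweat softens the keratin, with friction roughly tracking that contact area. All numbers below (initial fraction, time constant) are illustrative assumptions, not values from the study.

```python
import math

def contact_area(t, a0=0.3, a_inf=1.0, tau=5.0):
    """Illustrative model: fraction of fingertip ridge area touching glass.

    Assumes first-order relaxation from an initial contact fraction a0
    toward a steady-state fraction a_inf with time constant tau seconds
    (all three are made-up parameters for demonstration).
    """
    return a_inf - (a_inf - a0) * math.exp(-t / tau)

# Adhesive friction on a smooth surface scales roughly with real contact
# area, so grip strength builds on the same seconds-long timescale.
for t in [0, 2, 5, 10, 20]:
    print(f"t = {t:2d} s  contact fraction = {contact_area(t):.2f}")
```

With a time constant of a few seconds, the model reaches most of its final grip within the "few seconds up to twenty" window the researchers report; the study's observation that the rate varies per person would correspond to different `tau` values.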

The findings are an indication that our bodies sense when an object is too smooth to grab onto and react accordingly, but they could also factor into future touchscreen designs. So-called “haptic screens” could one day use small electric currents or ultrasonic vibrations to simulate sensations of roughness and texture on glass. By showing that tiny changes in skin consistency change how we interact with the world, research like this could help designers build a better experience with the glass we touch.

The small fluctuations in skin pliability that accompany our touching a smooth surface could also reveal how we can tell glass and rubber apart from just a touch.


----------



## ae1905

blogs.discovermagazine.com 

*An Unprecedented Number Of Species Have Crossed The Pacific On Tsunami-Liberated Plastic Debris*

These Asian Amur sea stars (_Asterias amurensis_) were found ~5,000 miles from home on the Oregon coast.
Image provided by Oregon State University

March 11, 2011, 2:46 PM, 45 miles east of Tōhoku, Japan. Fifteen miles beneath the waves, a magnitude-9 megathrust earthquake strikes. The Pacific and Eurasian tectonic plates suddenly shift, shaking the surrounding crust for six minutes and creating a tsunami almost 40 meters high, which races towards the coast of Japan. In the hours that follow, it claims at least 15,894 lives, with thousands more unaccounted for. More than a million buildings are damaged or destroyed, causing nearly $200 billion in damage.

The remnants of those buildings and all sorts of debris liberated by the moving waters have since spread the tsunami’s legacy far beyond the site of impact. As a new study in the journal _Science_ explains, thanks to objects set adrift by the tsunami’s waves, more than two hundred and eighty species have been found on the wrong side of the ocean.

How did hundreds of animals hitch rides across such vast distances? Well, to paraphrase the slogan from America’s Plastics Makers, plastics made it possible.

*“This has turned out to be one of the biggest, unplanned, natural experiments in marine biology, perhaps in history.”
—John Chapman, Oregon State University*

Human beings are increasingly helping animals and plants move from one place to another, often causing unforeseeable and sometimes devastating ecological effects. Our penchant for seafaring allowed rats and other pests to colonize vulnerable islands where they prey on defenseless endemic species, for example. It’s likely that flights to Guam carried snake stowaways nearly singlehandedly responsible for the eradication of the island’s endemic birds.

Now, our passion for plastic has led to long-lasting oceanic rafts which have ferried nearly 300 species thousands of miles from where they belong—whether they’ll take hold there remains to be seen.

*A Tsunami of Debris*

The movement of living things around the globe—what scientists call ‘dispersal’—can happen in many ways. Lightweight seeds can be blown by the wind, for example. And floating debris can create oases for life to move across oceans, a dispersal method referred to as “rafting”.

Oceanic rafting certainly isn’t new—it’s thought that many lineages of animals and plants, like South America’s monkeys and Madagascar’s lemurs, rafted their way to the lands they now call home. But successful rafting events like these are thought to be rare, as common rafting materials like wood often degrade before they find their way to a new shore.

Plastics, on the other hand, don’t.

The flow of species after the 2011 tsunami. Infographic Credit: Carla Schaffer / AAAS

Millions of objects, from small pieces of plastic to entire boats and docks, were washed to sea by the 2011 tsunami. About a year later, in the spring and summer of 2012, those objects started to appear around the Pacific. A dock from Misawa in Oregon on June 5th. A boat from Miyagi Prefecture on a beach in Ilwaco, Washington on June 15th. A tote on Midway Atoll that December; a refrigerator in Ocean City State Park, Washington, that February. And these bits and pieces of devastation just kept coming—researchers continued to collect items well into this year.

In total, a team of scientists hailing from Oregon State University, Oregon Institute of Marine Biology, Portland State University, Williams College, Moss Landing Marine Laboratories, and the Smithsonian Environmental Research Center documented 634 different objects that were transported thousands of miles from Japan to the islands of Hawai‘i and mainland North America from Alaska to California.

More surprising than the volume of objects, though, was what was found on them. The chunks of debris carried with them thriving communities of coastal organisms from the western Pacific.

*Unnatural Explorers*

“When we first saw species from Japan arriving in Oregon, we were shocked. We never thought they could live that long, under such harsh conditions,” said John Chapman, OSU marine scientist and coauthor on the paper, in a press release.

Over the five years of the study, the researchers documented 289 living invertebrate and fish species from tsunami debris, including a bountiful diversity of barnacles, crabs, urchins, and mussels. Many of these were found associated with a single object. Since the researchers were able to sample only a minuscule portion of the millions of items that washed out to sea, dozens, perhaps hundreds or even thousands, of other species may also have made the trek.

Japanese mussels (_Mytilus galloprovincialis_), barnacles (_Megabalanus rosa_), and sea anemones on a tsunami buoy washing ashore on Long Beach, Washington in February 2017. Image Credit: Nancy Treneman

While the researchers were amazed at the sheer ruggedness of marine life, it was equally surprising that the debris survived so long at sea.

For the first two years, many objects were made of natural materials—wooden boards, pallets, boats. But over time, fewer and fewer such items washed ashore. Yet debris kept arriving; the researchers quickly realized that the unprecedented volume of moving species was due to the ubiquitous use of plastics and other non-biodegradable, manmade materials.

Lead author of the study, James Carlton, pointed out that more than 10 million tons of plastic waste enters the ocean every year—creating millions of opportunities for species to move about. “And given that hurricanes and typhoons that could sweep large amounts of debris into the oceans are predicted to increase due to global climate change, there is huge potential for the amount of marine debris in the oceans to increase significantly.”

*Non-Nuclear Fallout*

The cities and towns hit by the tsunami are still dealing with the wreckage. The site of the Fukushima nuclear reactor is still undergoing cleanup. And thousands of miles across the Pacific, pieces of the hard-hit Japanese communities are still carrying bits of their homeland to new shores.

The organisms hitching rides on tsunami debris aren’t necessarily on their deathbeds. “Not only were new species still being detected on tsunami debris in 2017, but nearly 20 percent of the species that arrived were capable of reproduction,” said Jessica Miller, a marine ecologist with OSU. If they’re able to reproduce, then they could end up making the Eastern Pacific their new home—potentially at the expense of current residents.







Marine sea slugs on a Japanese derelict vessel from Iwate Prefecture washed ashore in Oregon, April 2015. Image Credit: John W. Chapman

“These vast quantities of non-biodegradable debris, potentially acting as novel ocean transport vectors, are of increasing concern,” said Carlton, “given the vast economic cost and environmental impacts documented from the proliferation of marine invasive species around the world.”

Though to date, no species transported by the tsunami debris are known to have established along the US mainland, Carlton is skeptical that the bullet has been dodged.

“It would not surprise me if there were species from Japan that are out there living along the Oregon coast,” he said. “In fact, it would surprise me if there weren’t.”

Citation: Carlton et al. 2017. Tsunami-driven rafting: Transoceanic species dispersal and implications for marine biogeography. _Science._ doi:10.1126/science.aao1498


----------



## ae1905

*The Nobel Prize in Physiology or Medicine 2017*

Jeffrey C. Hall, Michael Rosbash, Michael W. Young

2017-10-02

The Nobel Assembly at Karolinska Institutet has today decided to award

the 2017 Nobel Prize in Physiology or Medicine

jointly to

Jeffrey C. Hall, Michael Rosbash and Michael W. Young

for their discoveries of molecular mechanisms controlling the circadian rhythm

*Summary*

Life on Earth is adapted to the rotation of our planet. For many years we have known that living organisms, including humans, have an internal, biological clock that helps them anticipate and adapt to the regular rhythm of the day. But how does this clock actually work? Jeffrey C. Hall, Michael Rosbash and Michael W. Young were able to peek inside our biological clock and elucidate its inner workings. Their discoveries explain how plants, animals and humans adapt their biological rhythm so that it is synchronized with the Earth's revolutions.

Using fruit flies as a model organism, this year's Nobel laureates isolated a gene that controls the normal daily biological rhythm. They showed that this gene encodes a protein that accumulates in the cell during the night, and is then degraded during the day. Subsequently, they identified additional protein components of this machinery, exposing the mechanism governing the self-sustaining clockwork inside the cell. We now recognize that biological clocks function by the same principles in cells of other multicellular organisms, including humans.

With exquisite precision, our inner clock adapts our physiology to the dramatically different phases of the day. The clock regulates critical functions such as behavior, hormone levels, sleep, body temperature and metabolism. Our wellbeing is affected when there is a temporary mismatch between our external environment and this internal biological clock, for example when we travel across several time zones and experience "jet lag". There are also indications that chronic misalignment between our lifestyle and the rhythm dictated by our inner timekeeper is associated with increased risk for various diseases.

*Our inner clock*

Most living organisms anticipate and adapt to daily changes in the environment. During the 18th century, the astronomer Jean Jacques d'Ortous de Mairan studied mimosa plants, and found that the leaves opened towards the sun during daytime and closed at dusk. He wondered what would happen if the plant was placed in constant darkness. He found that independent of daily sunlight the leaves continued to follow their normal daily oscillation (Figure 1). Plants seemed to have their own biological clock.

Other researchers found that not only plants, but also animals and humans, have a biological clock that helps to prepare our physiology for the fluctuations of the day. This regular adaptation is referred to as the circadian rhythm, originating from the Latin words circa meaning "around" and dies meaning "day". But just how our internal circadian biological clock worked remained a mystery.


Figure 1. An internal biological clock. The leaves of the mimosa plant open towards the sun during day but close at dusk (upper part). Jean Jacques d'Ortous de Mairan placed the plant in constant darkness (lower part) and found that the leaves continue to follow their normal daily rhythm, even without any fluctuations in daily light.

*Identification of a clock gene*

During the 1970's, Seymour Benzer and his student Ronald Konopka asked whether it would be possible to identify genes that control the circadian rhythm in fruit flies. They demonstrated that mutations in an unknown gene disrupted the circadian clock of flies. They named this gene period. But how could this gene influence the circadian rhythm?

This year's Nobel Laureates, who were also studying fruit flies, aimed to discover how the clock actually works. In 1984, Jeffrey Hall and Michael Rosbash, working in close collaboration at Brandeis University in Waltham, and Michael Young at the Rockefeller University in New York, succeeded in isolating the period gene. Jeffrey Hall and Michael Rosbash then went on to discover that PER, the protein encoded by period, accumulated during the night and was degraded during the day. Thus, PER protein levels oscillate over a 24-hour cycle, in synchrony with the circadian rhythm.

*A self-regulating clockwork mechanism*

The next key goal was to understand how such circadian oscillations could be generated and sustained. Jeffrey Hall and Michael Rosbash hypothesized that the PER protein blocked the activity of the period gene. They reasoned that by an inhibitory feedback loop, PER protein could prevent its own synthesis and thereby regulate its own level in a continuous, cyclic rhythm (Figure 2A).

Figure 2A. A simplified illustration of the feedback regulation of the period gene. The figure shows the sequence of events during a 24h oscillation. When the period gene is active, period mRNA is made. The mRNA is transported to the cell's cytoplasm and serves as template for the production of PER protein. The PER protein accumulates in the cell's nucleus, where the period gene activity is blocked. This gives rise to the inhibitory feedback mechanism that underlies a circadian rhythm.

The model was tantalizing, but a few pieces of the puzzle were missing. To block the activity of the period gene, PER protein, which is produced in the cytoplasm, would have to reach the cell nucleus, where the genetic material is located. Jeffrey Hall and Michael Rosbash had shown that PER protein builds up in the nucleus during night, but how did it get there? In 1994 Michael Young discovered a second clock gene, timeless, encoding the TIM protein that was required for a normal circadian rhythm. In elegant work, he showed that when TIM bound to PER, the two proteins were able to enter the cell nucleus where they blocked period gene activity to close the inhibitory feedback loop (Figure 2B).

Figure 2B. A simplified illustration of the molecular components of the circadian clock.

Such a regulatory feedback mechanism explained how this oscillation of cellular protein levels emerged, but questions lingered. What controlled the frequency of the oscillations? Michael Young identified yet another gene, doubletime, encoding the DBT protein that delayed the accumulation of the PER protein. This provided insight into how an oscillation is adjusted to more closely match a 24-hour cycle.
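The mechanism above — PER repressing its own gene, with DBT imposing a lag on PER accumulation — can be sketched as a single delay differential equation: production of PER is inhibited by the PER level a fixed delay ago. The equation, parameters, and time units below are illustrative assumptions chosen so the loop oscillates, not measured values; real circadian models track mRNA, PER, TIM, and DBT separately.

```python
def simulate_per(k=1.0, K=1.0, n=4, d=0.2, tau=10.0,
                 dt=0.01, t_end=300.0, p0=0.5):
    """Euler integration of a delayed negative-feedback loop:

        dP/dt = k / (1 + (P(t - tau) / K)**n) - d * P

    Production is repressed by the PER level a delay `tau` ago (standing
    in for transcription, translation, and the DBT-imposed lag), and `d`
    is the degradation rate. All parameters are illustrative.
    """
    steps = int(t_end / dt)
    lag = int(tau / dt)
    p = [p0] * (lag + 1)          # constant history before t = 0
    for _ in range(steps):
        p_delayed = p[-lag - 1]   # PER level one delay in the past
        dp = k / (1 + (p_delayed / K) ** n) - d * p[-1]
        p.append(p[-1] + dt * dp)
    return p

levels = simulate_per()
tail = levels[len(levels) // 2:]  # discard the initial transient
print(f"sustained oscillation amplitude: {max(tail) - min(tail):.2f}")
```

With a long enough delay relative to the protein's lifetime, the trajectory settles into sustained oscillations rather than a steady state; shrinking `tau` below a critical value makes the oscillation die out, loosely mirroring how the delay contributed by DBT helps set the clock's period near 24 hours.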

The paradigm-shifting discoveries by the laureates established key mechanistic principles for the biological clock. During the following years other molecular components of the clockwork mechanism were elucidated, explaining its stability and function. For example, this year's laureates identified additional proteins required for the activation of the period gene, as well as for the mechanism by which light can synchronize the clock.

*Keeping time on our human physiology*

The biological clock is involved in many aspects of our complex physiology. We now know that all multicellular organisms, including humans, utilize a similar mechanism to control circadian rhythms. A large proportion of our genes are regulated by the biological clock and, consequently, a carefully calibrated circadian rhythm adapts our physiology to the different phases of the day (Figure 3). Since the seminal discoveries by the three laureates, circadian biology has developed into a vast and highly dynamic research field, with implications for our health and wellbeing.

Figure 3. The circadian clock anticipates and adapts our physiology to the different phases of the day. Our biological clock helps to regulate sleep patterns, feeding behavior, hormone release, blood pressure, and body temperature.

*Key publications*

Zehring, W.A., Wheeler, D.A., Reddy, P., Konopka, R.J., Kyriacou, C.P., Rosbash, M., and Hall, J.C. (1984). P-element transformation with period locus DNA restores rhythmicity to mutant, arrhythmic Drosophila melanogaster. Cell 39, 369–376.

Bargiello, T.A., Jackson, F.R., and Young, M.W. (1984). Restoration of circadian behavioural rhythms by gene transfer in Drosophila. Nature 312, 752–754.

Siwicki, K.K., Eastman, C., Petersen, G., Rosbash, M., and Hall, J.C. (1988). Antibodies to the period gene product of Drosophila reveal diverse tissue distribution and rhythmic changes in the visual system. Neuron 1, 141–150.

Hardin, P.E., Hall, J.C., and Rosbash, M. (1990). Feedback of the Drosophila period gene product on circadian cycling of its messenger RNA levels. Nature 343, 536–540.

Liu, X., Zwiebel, L.J., Hinton, D., Benzer, S., Hall, J.C., and Rosbash, M. (1992). The period gene encodes a predominantly nuclear protein in adult Drosophila. J Neurosci 12, 2735–2744.

Vosshall, L.B., Price, J.L., Sehgal, A., Saez, L., and Young, M.W. (1994). Block in nuclear localization of period protein by a second clock mutation, timeless. Science 263, 1606–1609.

Price, J.L., Blau, J., Rothenfluh, A., Abodeely, M., Kloss, B., and Young, M.W. (1998). double-time is a novel Drosophila clock gene that regulates PERIOD protein accumulation. Cell 94, 83–95.



Jeffrey C. Hall was born in 1945 in New York, USA. He received his doctoral degree in 1971 at the University of Washington in Seattle and was a postdoctoral fellow at the California Institute of Technology in Pasadena from 1971 to 1973. He joined the faculty at Brandeis University in Waltham in 1974. In 2002, he became associated with the University of Maine.

Michael Rosbash was born in 1944 in Kansas City, USA. He received his doctoral degree in 1970 at the Massachusetts Institute of Technology in Cambridge. During the following three years, he was a postdoctoral fellow at the University of Edinburgh in Scotland. Since 1974, he has been on the faculty of Brandeis University in Waltham, USA.

Michael W. Young was born in 1949 in Miami, USA. He received his doctoral degree at the University of Texas in Austin in 1975. Between 1975 and 1977, he was a postdoctoral fellow at Stanford University in Palo Alto. Since 1978, he has been on the faculty of the Rockefeller University in New York.


https://www.nobelprize.org/nobel_prizes/medicine/laureates/2017/press.html


----------



## ae1905

*Nobel in Physics for Detecting Gravitational Waves*



By Steve Mirsky on October 3, 2017 

The Nobel Prize in Physics goes to Rainer Weiss, Barry C. Barish and Kip S. Thorne "for decisive contributions to the LIGO detector and the observation of gravitational waves".

“The Royal Swedish Academy of Sciences has decided to award the 2017 Nobel Prize in Physics with one half to Rainer Weiss and the other half jointly to Barry C. Barish and Kip S. Thorne, all of them members of the LIGO/VIRGO collaboration. And the academy citation runs "for decisive contributions to the LIGO detector and the observation of gravitational waves".

Göran Hansson, secretary general of the academy, at 5:52 this morning Eastern time.

“Rainer Weiss was born in 1932 in Berlin in Germany. He received his Ph.D. at the Massachusetts Institute of Technology in the United States, and he is still affiliated with the M.I.T. as professor of physics. Dr. Weiss is since (sic) many years a U.S. citizen. Barry Barish was born in 1936 in Nebraska in the United States. He’s a professor of physics at Caltech, the California Institute of Technology. And finally, Kip Thorne was born in 1940 in Utah, in the U.S., and he’s currently professor of theoretical physics at Caltech.

“And as I mentioned, all three Nobel Laureates are members of the LIGO/Virgo collaboration, a large team of more than a thousand scientists who built and ran the detector that was used to discover gravitational waves. And with that, I’ll give the word to the chairman of the Nobel committee, Nils Mårtensson.”

“On the 14th of September, 2015, the Laser Interferometer Gravitational Wave Observatory, LIGO, succeeded for the first time to directly observe gravitational waves. These waves were predicted by Einstein a hundred years ago, but until now they have escaped direct detection. This is a truly remarkable achievement, which crowns almost 50 years of experimental efforts by hundreds of scientists and engineers. And today the LIGO collaboration includes a thousand members from 90 institutions on five continents…we now witness the dawn of a new field: gravitational wave astronomy. This will teach us about the most violent processes in the universe and it will lead to new insights into the nature of extreme gravity.”

For an in-depth listen about the 2017 Nobel Prize in Physics, look for the _Scientific American_ Science Talk podcast later today.

—Steve Mirsky


----------



## Marshy

Science is for the weak. Only idiots believe in it. You have to be a true intellectual to realize that science is a trick based on delusional perception of the weak minded, oh how i pity you all, yet envy you at the same time for i wish i could live in blissful ignorance as you do.


----------



## Pifanjr

Marshy14 said:


> Science is for the weak. Only idiots believe in it. You have to be a true intellectual to realize that science is a trick based on delusional perception of the weak minded, oh how i pity you all, yet envy you at the same time for i wish i could live in blissful ignorance as you do.


What would be the alternative?


----------



## ae1905

scientificamerican.com

*Nobel Chemistry Prize Won for Capturing Proteins in Action*

Josh Fischman
Three scientists developed microscope methods that use electrons and cold temperatures to reveal tiny details of life’s machinery

Images produced by cryo-electron microscopy show proteins in detail. _Credit: Royal Swedish Academy of Sciences_

Just before noon today, Stockholm time, three researchers struck scientific gold: the 2017 Nobel Prize in Chemistry. They had developed ways of imaging complex proteins at the atomic level, adapting electron microscopes to see how molecules create antibiotic resistance, how they convert light into energy for photosynthesis, and how the Zika virus functions. “For developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution,” the Royal Swedish Academy of Sciences awarded the Chemistry Prize to Jacques Dubochet of the University of Lausanne in Switzerland, Joachim Frank of Columbia University in New York City, and Richard Henderson of the MRC Laboratory of Molecular Biology in Cambridge, England.

"This discovery is like the Google Earth for molecules in that it takes us down to the fine detail of atoms within proteins,” says chemist Allison Campbell, president of the American Chemical Society, who has done research in biomaterials. And although the Nobel Committee emphasized the biological and medical applications, Campbell says the method can be used to analyze any type of polymer, such as industrial enzymes that break down plastics.

Frank, in a phone call during the Nobel announcement, noted the practical uses were not here yet. “This is not an immediate bedside application. Several years will go by,” he said. And Campbell agrees. “This is technology on the front end,” she says. “But the potential is huge. You could understand any target molecule that you are going after. You can see its shape, and when proteins change shape they change function.”

This is the second chemistry Nobel for microscopy in the past four years. In 2014 Stefan Hell, Eric Betzig, and William Moerner won for increasing the power of light microscopy and allowing scientists to see molecules in action within a living cell, although not at the level of atomic change.

X-ray crystallography had long been the go-to method for chemists and biologists seeking to understand the structure of proteins. It has helped scientists win more than a dozen Nobel Prizes, including the 1962 award for revealing DNA’s double helix, according to a news article in _Nature_, and by 2015 x-rays had been used to determine the structure of about 90 percent of the approximately 100,000 molecules in the popular Protein Data Bank.

But the technique cannot do everything. As its name implies, crystallography requires its targets to be made into crystals. And with many large, complicated molecules found in and around cells—such as ribosomes, which turn genetic instructions into working proteins—scientists simply could not make that happen.

Electrons, however, can bounce off every atom in a protein and reveal its structure. That structure is three-dimensional, and beginning in the 1970s Frank developed a mathematical image-processing method that allowed a computer to merge many two-dimensional electron microscope images into a sharp 3-D picture. Dubochet’s contribution was to show how this kind of microscopy could be used on biomolecules. Molecules such as proteins are surrounded by water that helps them maintain their structures, but the vacuum inside an electron microscope dried up that water. Dubochet figured out a way to cool the water so rapidly that it became like glass—the form is called vitrified water—and allowed the molecules within to retain their shapes.
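Frank's full method solves alignment, classification, and 3-D reconstruction, but its statistical core — averaging many noisy, low-dose images of identical particles so the shared signal emerges from the noise — can be shown in a few lines. The synthetic "particle", noise level, and image count below are arbitrary assumptions for illustration, not anything from the prize-winning work.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny synthetic "particle": a bright square on a dark background.
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0

# Each low-dose micrograph of one particle is swamped by noise.
noise_sigma = 2.0
images = [truth + rng.normal(0, noise_sigma, truth.shape)
          for _ in range(1000)]

# Averaging N aligned copies shrinks the noise by a factor of sqrt(N),
# letting the underlying structure emerge.
average = np.mean(images, axis=0)

def rmse(img):
    """Root-mean-square error relative to the noise-free particle."""
    return float(np.sqrt(np.mean((img - truth) ** 2)))

print(f"single image error: {rmse(images[0]):.2f}")   # ~2.0
print(f"averaged error:     {rmse(average):.2f}")
```

Here the error of the 1000-image average drops by roughly sqrt(1000) ≈ 32× relative to a single frame. The real method is far harder because the particles land in unknown orientations, so the images must be computationally aligned and sorted before any averaging — that alignment-and-merge machinery is what Frank contributed.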

Then in 1990, after 15 years’ work refining sample preparation and electron detection, Henderson succeeded in using an electron microscope to create an image of a large bacterial cell membrane protein called bacteriorhodopsin, and do it at atomic resolution. Henderson, with Nigel Unwin, wrote in _Scientific American’s_ February 1984 issue about how the two pioneered the use of electron microscopes to see the details of cell membrane proteins.

Now for any scientist “who is interested in a protein’s structure and function—well, I’d love to have one of these in my laboratory,” Campbell says. Dubochet, Frank, and Henderson will get their medals and each a third of the 9 million Swedish krona (about $1.1 million) prize at the annual Nobel ceremony in Stockholm in December.


----------



## ae1905




----------



## ae1905

*Astronaut Scott Kelly Describes One Year In Space -- And Its After Effects*

53-year-old astronaut Scott Kelly shared a dramatic excerpt from his new book  _Endurance: A Year in Space, A Lifetime of Discovery_ in the _Brisbane Times_, describing his first 48 hours back on Earth and what he'd learned on the mission: 

_I push back from the table and struggle to stand up, feeling like a very old man getting out of a recliner... I make it to my bedroom without incident and close the door behind me. Every part of my body hurts. All my joints and all of my muscles are protesting the crushing pressure of gravity. I'm also nauseated, though I haven't thrown up... When I'm finally vertical, the pain in my legs is awful, and on top of that pain I feel a sensation that's even more alarming: it feels as though all the blood in my body is rushing to my legs, like the sensation of the blood rushing to your head when you do a handstand, but in reverse. I can feel the tissue in my legs swelling... Normally if I woke up feeling like this, I would go to the emergency room. But no one at the hospital will have seen symptoms of having been in space for a year...

Our space agencies won't be able to push out farther into space, to a destination like Mars, until we can learn more about how to strengthen the weakest links in the chain that make space flight possible: the human body and mind... [V]ery little is known about what occurs after month six. The symptoms may get precipitously worse in the ninth month, for instance, or they may level off. We don't know, and there is only one way to find out... On my previous flight to the space station, a mission of 159 days, I lost bone mass, my muscles atrophied, and my blood redistributed itself in my body, which strained and shrank the walls of my heart. More troubling, I experienced problems with my vision, as many other astronauts had. I had been exposed to more than 30 times the radiation of a person on Earth, equivalent to about 10 chest X-rays every day. This exposure would increase my risk of a fatal cancer for the rest of my life. 
_ 
Kelly says the Space Station crew performed more than 400 experiments, though about 25% of his time went to tracking his own health. "If we could learn how to counteract the devastating impact of bone loss in microgravity, the solutions could well be applied to osteoporosis and other bone diseases. If we could learn how to keep our hearts healthy in space, that knowledge could be useful on Earth." Kelly says he felt better a few months after returning to Earth, adding "It's gratifying to see how curious people are about my mission, how much children instinctively feel the excitement and wonder of space flight, and how many people think, as I do, that Mars is the next step... I know now that if we decide to do it, we can."


----------



## ae1905

newscientist.com

*Half the universe’s missing matter has just been finally found*

By Leah Crane

Discoveries seem to back up many of our ideas about how the universe got its large-scale structure

Andrey Kravtsov (The University of Chicago) and Anatoly Klypin (New Mexico State University). Visualisation by Andrey Kravtsov

The missing links between galaxies have finally been found. This is the first detection of the roughly half of the normal matter in our universe – protons, neutrons and electrons – unaccounted for by previous observations of stars, galaxies and other bright objects in space.

You have probably heard about the hunt for dark matter, a mysterious substance thought to permeate the universe, the effects of which we can see through its gravitational pull. But our models of the universe also say there should be about twice as much ordinary matter out there, compared with what we have observed so far.

Two separate teams found the missing matter – made of particles called baryons rather than dark matter – linking galaxies together through filaments of hot, diffuse gas.

“The missing baryon problem is solved,” says Hideki Tanimura at the Institute of Space Astrophysics in Orsay, France, leader of one of the groups. The other team was led by Anna de Graaff at the University of Edinburgh, UK.

Because the gas is so tenuous and not quite hot enough for X-ray telescopes to pick up, nobody had been able to see it before.

“There’s no sweet spot – no sweet instrument that we’ve invented yet that can directly observe this gas,” says Richard Ellis at University College London. “It’s been purely speculation until now.”

So the two groups had to find another way to definitively show that these threads of gas are really there.

Both teams took advantage of a phenomenon called the Sunyaev-Zel’dovich effect that occurs when light left over from the big bang passes through hot gas. As the light travels, some of it scatters off the electrons in the gas, leaving a dim patch in the cosmic microwave background – our snapshot of the remnants from the birth of the cosmos.

*Stack ‘em up*

In 2015, the Planck satellite created a map of this effect throughout the observable universe. Because the tendrils of gas between galaxies are so diffuse, the dim blotches they cause are far too slight to be seen directly on Planck’s map.

Both teams selected pairs of galaxies from the Sloan Digital Sky Survey that were expected to be connected by a strand of baryons. They stacked the Planck signals for the areas between the galaxies, making the individually faint strands detectable en masse.

Tanimura’s team stacked data on 260,000 pairs of galaxies, and de Graaff’s group used over a million pairs. Both teams found definitive evidence of gas filaments between the galaxies. Tanimura’s group found they were almost three times denser than the mean for normal matter in the universe, and de Graaff’s group found they were six times denser – confirmation that the gas in these areas is dense enough to form filaments.

“We expect some differences because we are looking at filaments at different distances,” says Tanimura. “If this factor is included, our findings are very consistent with the other group.”
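
The stacking trick described above can be illustrated with a toy simulation (all numbers here are invented for illustration, not taken from either study): each galaxy pair's filament signal is buried far below the map noise, but averaging many pairs suppresses the noise by a factor of roughly the square root of the number of pairs.

```python
import math
import random

random.seed(42)

# Toy model of stacking: each galaxy pair contributes a Sunyaev-Zel'dovich
# signal far too weak to see individually, but averaging N pairs beats the
# per-pair noise down by a factor of sqrt(N).
SIGNAL = 0.01    # per-pair filament signal (arbitrary units; invented)
NOISE = 1.0      # per-pair map noise, 100x the signal
N_PAIRS = 100_000

measurements = [SIGNAL + random.gauss(0.0, NOISE) for _ in range(N_PAIRS)]

stacked = sum(measurements) / N_PAIRS          # the stacked estimate
sem = NOISE / math.sqrt(N_PAIRS)               # expected noise on the stack

print(f"stacked signal = {stacked:.4f} +/- {sem:.4f}")
```

With these made-up numbers a single pair is a hopeless 0.01-sigma measurement, while the stack of 100,000 pairs pulls the expected uncertainty down to about 0.003, making the combined signal detectable.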

Finally finding the extra baryons that have been predicted by decades of simulations validates some of our assumptions about the universe.

“Everybody sort of knows that it has to be there, but this is the first time that somebody – two different groups, no less – has come up with a definitive detection,” says Ralph Kraft at the Harvard-Smithsonian Center for Astrophysics in Massachusetts.

“This goes a long way toward showing that many of our ideas of how galaxies form and how structures form over the history of the universe are pretty much correct,” he says.

*Journal references:* _arXiv_, 1709.05024 and 1709.10378v1


----------



## ae1905

scientificamerican.com

*Astronomers Are Finally Mapping the “Dark Side” of the Milky Way*

Lee Billings

Think of the Milky Way—or search for pictures of it online—and you’ll see images of a standard spiral galaxy viewed face-on, a sprawling pinwheel of starlight and dust containing hundreds of billions of stars. These images, however, are mostly make-believe.

We know the Milky Way is a star-filled spiral galaxy in excess of 100,000 light-years wide, and we know our solar system drifts between two spiral arms at its outskirts, some 27,000 light-years from its center. But much beyond that, our knowledge fades. No space probe or telescope built by humans has ever escaped the Milky Way to turn back and take a portrait; because we are embedded in our galaxy’s disk, we can only see it as a bright band of stars across the sky. For astronomers trying to map it, the effort is a bit like learning the anatomy of a human body from the perspective of a single skin cell somewhere on a forearm. How many spiral arms does the Milky Way have, and how do those spiral arms branch and curl around the galaxy? How many stars does the Milky Way really contain? How much does it weigh? What does our cosmic home actually look like, viewed from another nearby galaxy? Ask an astronomer—and if he or she is being perfectly honest, you will learn that we do not fully know.

Among the biggest obstacles to our knowledge is the disk of the galaxy itself, particularly its center, which is thick with starlight-absorbing dust and rife with energetic astrophysical outbursts that can ruin delicate observations. This means we know very little about the other side of the galaxy. “Optically, it’s like trying to look through a velvet cloth—black as black can be,” says Thomas Dame, an astronomer at Harvard–Smithsonian Center for Astrophysics (CfA). “In terms of tracing and understanding the spiral structure, essentially half of the Milky Way is terra incognita.” Now, however, new record-breaking measurements are allowing astronomers to pierce the veil of the galactic center as never before, and to construct the best-ever maps of our galaxy’s structure.

Instead of using visible light, Dame and others map the Milky Way by looking for radio emissions from molecular gas clouds and massive, young stars, both of which typically reside in spiral arms. The challenge lies in measuring, in the absence of convenient intergalactic road signs or distance markers, how far off these objects are. Without knowing these distances, astronomers cannot precisely situate any given radio source within the galaxy to accurately reconstruct the Milky Way’s morphology. Since the 1950s astronomers have solved this problem using “kinematic distances,” calculations that treat objects in the Milky Way a bit like pieces of flotsam spiraling into a whirlpool; because things tend to move faster as they approach the center, measuring how fast an object is moving toward or away from us yields an estimate of its distance from the galactic center—and thus from our solar system. Kinematic distances have helped Dame and others discover previously unknown spiral arms and spiral-arm substructures on our solar system’s side of the Milky Way. But the technique breaks down for peering directly across the galaxy, where objects do not move toward or away from us at all but rather purely perpendicularly to our line of sight. To map the Milky Way’s hidden half requires a more direct method.
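
A minimal sketch of the kinematic-distance idea, assuming the simplest possible model: a flat rotation curve with everything orbiting at the same circular speed. The constants below are conventional round values (Sun at 8.5 kpc from the center, rotating at 220 km/s), not figures from any particular survey, and the example longitude and velocity are invented.

```python
import math

R0 = 8.5        # assumed Sun-to-galactic-center distance, kpc
THETA0 = 220.0  # assumed circular rotation speed, km/s

def galactocentric_radius(v_los, l_deg):
    """Radius implied by a line-of-sight velocity at galactic longitude l,
    assuming a flat rotation curve (THETA = THETA0 everywhere)."""
    s = math.sin(math.radians(l_deg))
    # v_los = R0*sin(l)*(THETA/R - THETA0/R0); with THETA = THETA0, solve for R:
    return 1.0 / (v_los / (THETA0 * R0 * s) + 1.0 / R0)

def kinematic_distances(v_los, l_deg):
    """The two line-of-sight distances (kpc) consistent with v_los."""
    l = math.radians(l_deg)
    R = galactocentric_radius(v_los, l_deg)
    half_chord = math.sqrt(R**2 - (R0 * math.sin(l))**2)
    return R0 * math.cos(l) - half_chord, R0 * math.cos(l) + half_chord

near, far = kinematic_distances(20.0, 30.0)   # illustrative inputs
print(f"near: {near:.1f} kpc, far: {far:.1f} kpc")
```

Two known weaknesses of the method fall straight out of the geometry: a single velocity is consistent with both a near and a far distance (the two roots above), and toward the galactic center or anticenter sin(l) goes to zero, so the line-of-sight velocity carries no distance information at all—exactly the breakdown the article describes for objects straight across the galaxy.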

In a study published October 12 in _Science_, Dame and an international team of colleagues have demonstrated just that. Using the National Science Foundation’s Very Long Baseline Array (VLBA), an interlinked system of 10 radio telescopes stretching across Hawaii, North America and the Caribbean, the astronomers have directly measured the distance to an object called G007.47+00.05, a star-forming region located on the opposite side of the galaxy from our solar system. The measurement showed the region to be some 66,000 light-years away—nearly 40,000 light-years beyond the galactic center, and roughly double the distance of the previous record-holding direct measurement of distance in the Milky Way.

The team relied on a timeworn technique called parallax, which measures the apparent shift in an object’s celestial position when seen from opposing sides of the Earth’s orbit around the sun. You can see parallax on smaller scales simply by holding a finger in front of your face and winking one eye then the other. Your finger will seem to jump from side to side; calculating its distance from your face is as simple as measuring the angle of its apparent shift. The smaller the angle, the greater the distance. And the wider the distance between your two detectors, be they eyes or radio dishes, the more acute your measurement can be.

The VLBA’s parallax observations took place in 2014, when Earth was on one side of its orbit, and then *six months* later in 2015, when our planet was on the opposite side of the sun. This maximized the sensitivity of the technique, allowing it to measure the minuscule shift in the apparent position of the distant star-forming region. According to lead author Alberto Sanna, a postdoctoral researcher at the Max Planck Institute for Radio Astronomy in Germany, the VLBA’s measurement is “equivalent to seeing a baseball on the surface of the moon.” The feat, Sanna says, shows “we can measure the whole extent of our galaxy, to accurately number and map the Milky Way’s spiral arms and know their true shapes, so that we can learn what the Milky Way really looks like.”

“It really is excellent work—I believe this is the smallest parallax ever obtained, and it is certainly a milestone in modern observational astronomy,” says Mareki Honma, an astronomer at the National Astronomical Observatory of Japan. Honma led a separate team that independently measured the distance to G007.47+00.05 in 2016, finding a similar value. Those measurements, however, were not accurate enough to obtain parallax, and relied instead on tracking the star-forming region’s so-called “proper motion” across the plane of the sky. The similarity between the two teams’ results, Honma says, suggests proper motion alone can be a useful tool for determining distances to objects on the other side of the galaxy.

Already, the confirmed distance for this particular star-forming region is redrawing galactic maps. In 2011 Dame and colleagues used radio measurements to tentatively trace the path of one spiral arm, called Scutum–Centaurus. Their fragmentary measurements suggested this arm might wrap around almost the entirety of the Milky Way, but they lost its trail—and crucial evidence for its galaxy-encircling breadth—in the vicinity of the dark, roiling galactic center. This star-forming region “runs right through one of the features we identified in 2011, and adds evidence that the Scutum–Centaurus arm is really a major structure in our galaxy,” Dame says. “In 2011 we wrote that we may never sort this out, because proving its distance through the galactic center would be so difficult—but we were being shortsighted, because here it is, six years later!”

The VLBA’s painstaking, Earth-orbit-spanning measurement occurred as part of a larger project, the Bar and Spiral Structure Legacy Survey (BeSSeL) led by Mark Reid, who like Dame is a radio astronomer at the CfA and a co-author on the _Science_ study. Now in its concluding stages, BeSSeL used 3,500 hours on the VLBA to obtain more than 200 distance measurements for star-forming regions scattered throughout the Milky Way. Many of these readings are now tracing out new details in the galaxy’s filigree of spiral arms.

Which is a good start—but being in the Northern Hemisphere, the VLBA and BeSSeL cannot survey most of the star-forming regions visible from the southern sky. And even if they could, parallax alone will not fill in the galactic map. Because each parallax measurement for far-distant star-forming regions on the other side of the galaxy is so difficult and time-consuming to obtain, astronomers widely agree they will chiefly serve as important calibration points to augment existing kinematic distance measurements. Further progress will come from a combination of parallax, proper motion and kinematic distance data via surveys using Southern Hemisphere–based radio telescopes as well as from space-based data from the European Space Agency’s Gaia satellite. The latter is using visible-light parallax measurements to pin down the precise positions for a billion of the Milky Way’s stars. Taken together, the resulting map will help astronomers pin down many still-unknown fundamental aspects of our galaxy such as how fast and uniformly it rotates. This will let them finally determine just how massive the Milky Way really is, potentially yielding new insights into our galaxy’s inventory of stars, dark matter and small satellites that lurk at its edges. All of this will help scientists understand how the Milky Way first came to be, and much that has happened to it since.

“How important is it, really, for us to be able to see clear across to the other side of our own galaxy?” asks Tom Bania, a radio astronomer at Boston University involved in some of the southern surveys. “It is the most important thing in all of astrophysics. It took humankind thousands of years to map the Earth accurately; a map of the galaxy will constrain about a dozen or so models of the structure and evolution of the Milky Way. To me, perhaps the ‘Holy Grail’ of astronomy is to provide a clear perspective of our relationship to the physical universe. The map of our galaxy is a part of that, and that map is still incomplete.”

Soon, that could change. Thanks to BeSSeL and its ilk, Reid notes, “in only a few more years we should have a map that shows us what the Milky Way really looks like.”


----------



## ae1905




----------



## ae1905

scientificamerican.com

*Gravitational Wave Astronomers Hit Mother Lode*

Lee Billings

Ushering in the beginning of a new era in astronomy and physics, scientists on Monday announced they have for the first time detected the spacetime ripples known as gravitational waves from the collision of two neutron stars. Streaming in from the sky over the Indian Ocean on August 17, the waves registered at the twin detecting stations of the U.S.-based Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO), as well as a European detector called Virgo located in Italy. This is the fifth time in the last two years that scientists have confirmed spotting such waves, a phenomenon that Einstein first predicted more than a century ago—and that led to this year’s Nobel Prize in Physics for three of LIGO’s leaders.

All of the previously detected gravitational waves, however, came from merging pairs of black holes. These objects are so dense that light cannot escape their grasp, making such mergers essentially invisible to normal telescopes despite the prodigious gravitational waves they generate in the final moments of their incredibly violent death spirals. Without a much-larger network of gravitational wave observatories, astronomers cannot pin down the precise locations of merging black holes, let alone deeply investigate them.

But neutron-star mergers begin with objects that in comparison to black holes can be featherweights. A neutron star is the highly compressed core of an expired massive star, and is formed in the aftermath of a supernova explosion. Its gravitational field is strong enough to squeeze and break down an entire sun’s worth of matter into a city-sized orb of neutrons, making it less a true “star” and more an atomic nucleus as big as Manhattan. But a neutron star’s gravity is still too weak to trap light. So the flash from two of them slamming together can escape into the cosmos, producing not just gravitational waves but also one of the universe’s most brilliant fireworks displays for anyone who cares to look.

In this case, after the initial chirp of gravitational waves signaling the onset of the merger, the “fireworks” consisted of a two-second-long gamma ray burst (GRB) followed by a weeks-long, multi-wavelength afterglow—and “anyone” proved to be nearly every astronomer and physicist on Earth who had found out about the event. The astronomers working with the LIGO and Virgo physicists had been sworn to secrecy. But the sheer volume of follow-up observations around the world unavoidably spawned public rumors, now confirmed, about a global campaign to track the collision and its aftermath. The resulting frenzy of new observations and theories is the most potent example yet of “multimessenger” astronomy, an emerging field in which light, gravitational waves and subatomic particles emitted from astrophysical cataclysms are collected and studied in unison.

In a mammoth series of papers published simultaneously across several journals, researchers are linking the latest event to a vast range of phenomena and providing fresh insights on everything from fundamental nuclear physics to the large-scale evolution of the universe. Among other things, the merger gave observers a front-row seat at the birth of a black hole, which the colliding neutron stars likely produced. The discovery that most glitters, though, is smoking-gun evidence that neutron star mergers—rather than run-of-the-mill supernovae—are the cosmic crucibles that forge the universe’s heavy elements: substances including uranium, platinum and gold.

So it looks as if the radioactive pile in a nuclear reactor, the catalytic converter in your car, and yes, the precious metal in your wedding band may all come from the smashed-up innards of the universe’s smallest, densest and most exotic stars—or at least whatever fraction can escape without falling into a merger’s resulting black hole. The result could solve an ongoing debate over the cosmic origins of heavy elements that has possessed theorists for more than half a century. The bulk of the universe’s hydrogen and helium was produced in the first moments after the big bang, and most of the lighter elements—oxygen, carbon, nitrogen and so on—were formed from nuclear fusion in stars. But the origin of the heaviest elements had been a lingering question until now.

“We have hit the mother lode!” says Laura Cadonati, an astrophysicist at Georgia Institute of Technology and LIGO’s deputy spokesperson. “This is really the first time we have multimessenger detection of a single astrophysical event, where gravitational waves are telling us the story of what happened before the cataclysm and the electromagnetic emissions are telling us what happened after.” Although presently inconclusive, Cadonati says, analyses of the event’s gravitational waves could eventually reveal details of how matter “sloshes around” within neutron stars as they merge, giving researchers a new way to study these bizarre objects and learn just how big they can get before collapsing into a black hole. Relatedly, Cadonati notes, there was a mysterious gap of about two seconds between the end of the gravitational-wave chirp and the onset of the GRB—an interval, perhaps, in which the structural integrity of the combined neutron stars briefly resisted the inevitable collapse.

For many researchers the breakthrough has been a long time coming. “My dream has come true,” says Szabolcs Marka, an astrophysicist and LIGO team member at Columbia University who was an early proponent of multimessenger astronomy in the late 1990s. Back then, he recalls, he was seen as “that crazy guy” trying to prepare for follow-up observations on gravitational waves—a phenomenon that was then still decades away from direct detection. “Now, I and others feel vindicated,” Marka says. “We have studied this system of colliding neutron stars in a very diverse set of messengers. We have seen it in gravitational waves, in gamma rays, in ultraviolet, visible and infrared light, and in x-rays and radio waves. … This is the revolution—the evolution—of astronomy that I first hoped for 20 years ago.”

France Córdova, director of the National Science Foundation, or NSF (the U.S. federal agency that supplied the bulk of LIGO’s funding), calls the observatory’s latest achievement a “historic moment in science” that could not have come without decades of sustained governmental support for a variety of astrophysical observatories. “The detection of gravitational waves, from the first short chirp heard round the world to this latest, longer chirp, not only validates the kind of high-risk, high-reward investments that the NSF makes, but also spurs us to want to do more,” Córdova says. “My hope is that the NSF will continue to support innovators and innovations that will transform knowledge, and inspire many generations to come.”

*The Golden Opportunity*

After the initial detections of the merger’s gravitational waves and its subsequent GRB (the latter of which was immediately observed by the Fermi and Integral space telescopes), the race was on to find the collision’s source—and hopefully its afterglow—in the sky. Within hours multiple teams had marshalled available telescopes to stare at the region where LIGO’s and Virgo’s scientists had calculated the source must be: a swath of the heavens spanning 31 square degrees and containing hundreds of galaxies. (Using LIGO alone, Cadonati says, the search would have been like “looking for the glimmer of a gold ring in the Pacific.” With the addition of a third data point from Virgo, she says, the researchers could properly triangulate the source’s position, reducing the search to something more like seeking “a gold ring somewhere in the Mediterranean.”)

The bulk of the observations took place at observatories in Chile as soon as the sun had set and the crucial region of sky drifted up over the horizon, with different teams adopting an assortment of search strategies. Some simply “tiled” the region with observations, moving methodically from one side to the other; others targeted subsets of galaxies that theories suggested would be most likely to host a neutron star merger. In short order, the targeting strategy won out.

First to actually see the optical afterglow was Charles Kilpatrick, a postdoctoral researcher at the University of California, Santa Cruz. He was sitting at his desk, sorting through images of selected galaxies at the behest of one of his colleagues at Santa Cruz, the astronomer Ryan Foley, who had helped organize the campaign. In the fourth image he examined, hastily taken and transmitted by colleagues half a world away using the meter-wide Swope telescope at the Las Campanas Observatory in Chile, he saw it: a bright blue dot embedded in a giant elliptical galaxy, a 10-billion-year-old swarm of old, red stars about 120 million light years away, nameless save for catalog designations. Such galaxies are thought to be the main cosmic homes for neutron-star mergers due to their advanced age, stellar density and relative lack of recent star formation. A side-by-side comparison of earlier images of that same galaxy showed no such dot; it was something new and recent. “It very slowly dawned on me what a momentous occasion this was,” Kilpatrick recalls, “but I had tunnel vision at the time, just trying to work as quickly as possible.”

Kilpatrick notified other team members including Josh Simon, a Carnegie Observatories astronomer who rapidly obtained a confirmation image with one of the larger 6.5-meter twin Magellan telescopes in Chile. The blue dot was there, too. Over the course of an hour, Simon followed up by measuring the dot’s spectrum—the various colors of light it emitted—in a pair of five-minute exposures. Those spectra could prove useful for further study, he reasoned, or if nothing else they could serve to ensure the blip was not an ordinary supernova or some other cosmic imposter. Meanwhile, other teams had spotted the dot and were embroiled in follow-ups of their own. The rapid confirmation and spectra from Foley’s team, however, clinched provenance for them. “We had the first image of this, and we have the first identification of the source in this image,” Simon says. “Because we obtained both of those so early, we were also able to get the first spectrum for this merger—which no one else in Chile was able to do that first night—and then we issued the first announcement to the rest of the community.”

Those early spectral observations proved vital for subsequent analysis and solving several mysteries. They showed the merger’s leftovers rapidly cooling, fading from a brilliant sapphire blue to a dim ruby in the sky. These readings were verified over the next few weeks of observation as the visible dot faded, its afterglow shifting and peaking in cooler, longer-wavelength infrared light. The general pattern of colors, cooling and expansion hews close to what was predicted years earlier by a number of theorists working independently of each other, most notably Brian Metzger of Columbia University and Dan Kasen of the University of California, Berkeley.

In short, Metzger explains, what astronomers have seen from the merger’s aftermath is something called a “kilonova”: an intense outburst of luminosity created by the ejection and radioactive decay of white-hot, neutron-rich material from the neutron stars. As the material expands and cools, most of its neutrons are captured by the nuclei of iron and other heavy elements left over as ashes from the neutron star’s formative supernova explosion, forming even heavier elements. “Over the course of about one second, as the ejecta are capturing these neutrons and expanding through space, one of these mergers will form the lower half of the periodic table—gold, platinum, uranium, and so on,” Metzger says. Near its conclusion, the kilonova’s light dramatically shifts to infrared as the neutrons cascading through the ejecta forge the heaviest elements, which efficiently absorb visible light.

Measuring the kilonova’s spectral evolution, in turn, allows astronomers to estimate the amount of different elements it has produced. Edo Berger, who studies kilonovae at the Harvard-Smithsonian Center for Astrophysics and oversaw many of the most ambitious follow-up observations of the merger, estimates the event produced roughly 16,000 Earth masses worth of heavy elements. “That’s everything—gold and platinum and uranium, as well as all the weird ones you see just as letters on the periodic table and don’t know their names,” he says. “As for the breakdown? For that I don’t think we have exact answers yet.”

Some theorists have suggested only a few tens of Earth masses of gold were made in the merger. Metzger, for his part, pegs the merger’s gold output at roughly 100 Earth masses, with about three times more platinum and 10 times less uranium. In any case, when paired with updated statistical estimates of how often these mergers must occur, based in part on the latest detection, “you get a high enough rate per galaxy per year to build up the elements that form our own solar system and the abundances we see in other stars,” Metzger says. “All that stuff we see, you can explain through these mergers. There may be other ways to make heavy elements, but you don’t seem to need them.” On average, he says, probably only one neutron-star merger occurs in the Milky Way every 10,000 years.

*The Far Frontier*

What’s more, studying exactly how a merger’s kilonova evolves can convey crucial information about how the collision unfolded. For instance, the light from this merger’s initial emission was bluer than expected, suggesting to Metzger and others that the kilonova was being viewed at an angle rather than face-on. In this scenario the early blue emission would come from a spherical shell or equatorial band of relatively neutron-poor material blown out from the neutron stars at perhaps 10 percent of lightspeed. The later, redder emission would emerge from very neutron-rich material ejected at two to three times higher speeds from the neutron stars’ poles as they collided, like toothpaste squirted from a tube.

Paired with detailed x-ray and radio observations, this scenario helps explain the curious nature of the gamma-ray burst associated with the merger—the closest GRB ever seen, but also one of the faintest. Short GRBs are thought to be bipolar jets of intense radiation spun up and ejected at nearly lightspeed by churning magnetic fields within colliding neutron stars as they coalesce and collapse into a black hole. Viewed face-on—down the barrel of the GRB gun, so to speak—they are extremely bright. This is the case with the majority of such bursts that astronomers witness in the distant universe. But if they are tilted or inclined from our perspective they would appear rather dim and would only be detectable if they were relatively close, within several hundred million light-years.

Using the wealth of data available from multimessenger astronomy, then, astronomers could eventually determine the viewing angles of many kilonovae throughout the observable universe, making each one a more potent marker for measuring large-scale cosmic structure and evolution. This could allow scientists to better confront a mystery arguably deeper than the origin of the heavy elements: the baffling fact that the universe is not merely expanding, but accelerating at an ever-increasing rate under the influence of a kind of cosmos-spanning anti-gravity known as dark energy.

Cosmologists hope to better understand dark energy by precisely measuring its effect upon the universe, tracking objects in ever-more-distant regions of the universe to see how far away they are, and how fast they are moving, caught up in dark energy’s accelerating flow. But to do this they need reliable “standard candles,” objects with known brightness that can be used to calibrate this vast, sweeping view of spacetime. Daniel Holz, an astrophysicist at the University of Chicago, has demonstrated how merging neutron stars could contribute to this effort. His work shows that the strength of this latest merger’s gravitational waves and the emissions of its kilonova can be used to calculate the local universe’s expansion rate. Limited to just one merger the technique yields a value with significant uncertainties, albeit still in the ballpark of the expansion rate obtained from other methods. But in coming years—as gravitational-wave observatories and a new generation of large telescopes on the ground and in space work together to identify hundreds or even thousands of neutron star collisions per year—those estimates will markedly improve.
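
Holz's "standard siren" technique reduces, in the nearby universe, to Hubble's law: the gravitational-wave signal yields a distance, the host galaxy's redshift a recession velocity, and their ratio an expansion rate. A minimal sketch with round, assumed numbers for a GW170817-like event, not the published measurement:

```python
# Illustrative "standard siren" estimate of the Hubble constant.
# Both inputs are round, assumed values, not real measurements.

def hubble_constant(recession_velocity_km_s, distance_mpc):
    """H0 = v / d, valid in the nearby universe where the
    velocity-distance relation is approximately linear."""
    return recession_velocity_km_s / distance_mpc

distance = 40.0    # Mpc, inferred from the gravitational-wave amplitude (assumed)
velocity = 3000.0  # km/s, from the host galaxy's redshift (assumed)

print(f"H0 = {hubble_constant(velocity, distance):.0f} km/s/Mpc")
```

With these inputs the sketch gives 75 km/s/Mpc, in the ballpark of values from other methods; the real analysis carries large uncertainties from a single event, which is why hundreds of detections are needed.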

“What all this means is that the gravitational waves from these mergers measured by LIGO and Virgo are complementary with modeling of kilonovae that suggests their inclination, their viewing angle, by their spectral evolution from blue to red,” says Richard O’Shaughnessy, an astrophysicist and LIGO team member at the Rochester Institute of Technology. “That is a powerful synergy. If we know the inclination we can know the distance, and that helps us with cosmology. What has been done here is a prototype for what we will be doing regularly in the future.”

“If you think about it, the universe is sort of a cosmic particle collider, with neutron stars as the particles,” O’Shaughnessy says. “It throws them together, and we now have the opportunity to see what comes out. We are going to see so many of these in the coming years—how many, I can’t tell you, but people already describe it as a ‘rain.’ This event is a Rosetta Stone, giving us real data to connect disparate threads of astrophysics that previously only existed in the mind of theorists or as bits in a supercomputer simulation. It allows us to understand the cosmic abundance of heavy elements. It allows us to probe the squishiness of nuclear matter at extreme densities. It allows us to measure the expansion of the universe. These synergies set the agenda for all of high-energy astrophysics for decades to come, and are built on decades of investment. We are now reaping the reward, a mountain of gold 10 or a hundred times the mass of the Earth, that the universe just gave us.”


----------



## ae1905

arstechnica.com

*Ophelia became a major hurricane where no storm had before*

Eric Berger - 10/16/2017, 9:14 AM
[HR][/HR] *Look out... Ireland? —*

*"I really can’t believe I’m seeing a major just south of the Azores."*








It's safe to say that as a major hurricane, Ophelia was something of an outlier on Saturday.

Sam Lillo/Twitter

The system formerly known as Hurricane Ophelia is moving into Ireland on Monday, bringing "status red" weather throughout the day to the island. The Irish National Meteorological Service, Met Éireann, has warned that, "Violent and destructive gusts of 120 to 150km/h are forecast countrywide, and in excess of these values in some very exposed and hilly areas. There is a danger to life and property."

Ophelia transitioned from a hurricane to an extra-tropical system on Sunday, but that only marginally diminished its threat to Ireland and the United Kingdom on Monday, before it likely dissipates near Norway on Tuesday. The primary threat from the system was high winds, accompanied by heavy rains.

Forecasters marveled at the intensification of Ophelia on Saturday, as it reached Category 3 status on the Saffir-Simpson scale and became a major hurricane. For a storm in the Atlantic basin, this is the farthest east that a major hurricane has been recorded during the satellite era of observations. Additionally, it was the farthest north, at 35.9 degrees north, that an Atlantic major hurricane has existed this late in the year since 1939.

It's not every day WPC has a conference call with the @metoffice & @MetEireann for #Ophelia! Visit @NHC_Atlantic for official track/info! pic.twitter.com/bEG2bNr3It
— NWS WPC (@NWSWPC) October 14, 2017​
"Ophelia is breaking new ground for a major hurricane," National Hurricane Center scientist Eric Blake wrote on Twitter. "Typically those waters much too cool for anything this strong." He also added, "I really can’t believe I’m seeing a major just south of the Azores." Seas near where Ophelia intensified Saturday were 1-2 degrees Celsius above normal.

Ophelia is just the latest in a series of major hurricanes that have brought destruction or set records this year. Hurricane Harvey brought unprecedented flooding to the upper Texas coast, Hurricane Irma ravaged the Caribbean Islands and parts of Florida, and Hurricane Maria brought devastating rains and winds to Puerto Rico, where a majority of the island remains without power nearly a month after landfall.


----------



## ae1905

theconversation.com

*Flowers' secret signal to bees and other amazing nanotechnologies hidden in plants*

Stuart Thompson
[HR][/HR] Flowers have a secret signal that’s specially tailored for bees so they know where to collect nectar. And new research has just given us a greater insight into how this signal works. Nanoscale patterns on the petals reflect light in a way that effectively creates a “blue halo” around the flower that helps attract the bees and encourages pollination.

This fascinating phenomenon shouldn’t come as too much of a surprise to scientists. Plants are actually full of this kind of “nanotechnology” that enables them to do all kinds of amazing things, from cleaning themselves to generating energy. And, what’s more, by studying these systems we might be able to put them to use in our own technologies.

Most flowers appear colourful because they contain light-absorbing pigments that reflect only certain wavelengths of light. But some flowers also use iridescence, a different type of colour produced when light reflects from microscopically spaced structures or surfaces.

The shifting rainbow colours you can see on a CD are an example of iridescence. It’s caused by interactions between light waves bouncing off the closely spaced microscopic indentations in its surface, which means some colours become more intense at the expense of others. As your viewing angle shifts, the amplified colours change to give the shimmering, morphing colour effect that you see.
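
The physics here is the diffraction grating equation, d·sin(θ) = m·λ: for a given groove spacing d, each wavelength λ is amplified at a different angle. A quick sketch, assuming a round value of 1600 nm for a CD's track spacing:

```python
import math

def diffraction_angle_deg(spacing_nm, wavelength_nm, order=1):
    """Angle of the m-th diffracted order from d*sin(theta) = m*lambda.
    Returns None when no such order exists for this geometry."""
    s = order * wavelength_nm / spacing_nm
    return math.degrees(math.asin(s)) if s <= 1 else None

# Assumed CD track spacing of about 1600 nm.
for colour, wavelength in [("blue", 450), ("green", 550), ("red", 650)]:
    angle = diffraction_angle_deg(1600, wavelength)
    print(f"{colour} ({wavelength} nm) is amplified at {angle:.1f} degrees")
```

Each colour exits at its own angle, which is the shimmering rainbow effect; petal grooves spaced at a comparable one to two thousandths of a millimetre behave the same way.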







Bees can see a blue halo around the purple region. Edwige Moyroud 

Many flowers use grooves between one and two thousandths of a millimetre apart in the wax coating on their surface to produce iridescence in a similar way. But researchers investigating the way that some flowers use iridescence to attract pollinating bees have noticed something odd. The spacing and alignment of the grooves weren’t quite as perfect as expected. And they weren’t quite perfect in very similar ways in all of the types of flowers that they looked at.

These imperfections meant that instead of giving a rainbow as a CD does, the patterns worked much better for blue and ultra-violet light than other colours, creating what the researchers called a “blue halo”. There was good reason to suspect that this wasn’t a coincidence.

The colour perception of bees is shifted towards the blue end of the spectrum compared to ours. The question was whether the flaws in the wax patterns were “designed” to generate the intense blues, violets and ultra-violets that bees see most strongly. Humans can occasionally see these patterns but they are usually invisible to us against red or yellow pigmented backgrounds that look much darker to bees.

The researchers tested this by training bees to associate sugar with two types of artificial flower. One had petals made using perfectly aligned gratings that gave normal iridescence. The other had flawed arrangements replicating the blue halos from different real flowers.

They found that although the bees learned to associate the iridescent fake flowers with sugar, they learned better and more quickly with the blue halos.

Fascinatingly, it seems that many different types of flowering plant may have evolved this structure separately, each using nanostructures that give slightly off-kilter iridescence to strengthen their signals to bees.







Wait a minute! This isn’t a flower. Edwige Moyroud

*The lotus effect*

Plants have evolved many ways to use these kinds of structures, effectively making them nature’s first nanotechnologists. For example, the waxes that protect the petals and leaves of all plants repel water, a property known as “hydrophobicity”. But in some plants, such as the lotus, this property is enhanced by the shape of the wax coating in a way that effectively makes it self-cleaning.

The wax is arranged in an array of cone-like structures about five thousandths of a millimetre in height. These are in turn coated with fractal patterns of wax at even smaller scales. When water lands on this surface, it can’t stick to it at all and so it forms spherical drops that roll across the leaf picking up dirt along the way until they fall off the edge. This is called “superhydrophobicity” or the “lotus effect”.

*Smart plants*

Inside plants there is another type of nanostructure. As plants take up water from their roots into their cells, the pressure builds inside the cells until it is like being between 50 metres and 100 metres under the sea. In order to contain these pressures, the cells are surrounded by a wall based on bundles of cellulose chains between five and 50 millionths of a millimetre across called microfibrils.
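
The "50 to 100 metres under the sea" comparison can be turned into numbers with the hydrostatic formula P = ρgh. A quick sketch, using approximate constants:

```python
# Converting depth under seawater into pressure via P = rho * g * h.
# Constants are approximate round values.

RHO_SEAWATER = 1025.0  # kg/m^3, approximate density of seawater
G = 9.81               # m/s^2, gravitational acceleration
ATM = 101325.0         # Pa per standard atmosphere

def pressure_atm(depth_m):
    """Gauge pressure at a given seawater depth, in atmospheres."""
    return RHO_SEAWATER * G * depth_m / ATM

for depth in (50, 100):
    print(f"{depth} m of seawater: about {pressure_atm(depth):.1f} atm")
```

That works out to roughly 5 to 10 atmospheres, the turgor pressure a plant cell wall has to contain.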

The individual chains are not that strong but once they are formed into microfibrils they become as strong as steel. The microfibrils are then embedded in a matrix of other sugars to form a natural “smart polymer”, a special substance that can alter its properties to allow the plant to grow.

Humans have always used cellulose as a natural polymer, for example in paper or cotton, but scientists are now developing ways to release individual microfibrils to create new technologies. Because of its strength and lightness, this “nanocellulose” could have a huge range of applications. These include lighter car parts, low calorie food additives, scaffolds for tissue engineering, and perhaps even electronic devices that could be as thin as a sheet of paper.

Perhaps the most astonishing plant nanostructures are the light-harvesting systems that capture light energy for photosynthesis and transfer it to the sites where it can be used. Plants are able to move this energy with an incredible 90% efficiency.

We now have evidence that this is because the exact arrangement of the components of the light-harvesting systems allows them to use quantum physics to test many different ways to move the energy simultaneously and find the most effective. This adds weight to the idea that quantum technology could help provide more efficient solar cells. So when it comes to developing new nanotechnology, it’s worth remembering that plants may have got there first.


----------



## ae1905

blogs.discovermagazine.com 

*How Volcanoes Starved Ancient Egypt*

[HR][/HR] Mount Sinabung, Indonesia. _(Credit: Yosh Ginsu/Unsplash)_

Ancient Egypt was the most powerful civilization in the world for a time. The monuments built by laborers to honor pharaohs stand to this day, testament to the vast resources at their command.

But the architectural excess hid a crippling weakness. Egypt sits in the middle of a vast desert. To support a population that numbered in the millions, large-scale agriculture was vital, and for that you need water, and therefore, the Nile. The river was so important to the Egyptians that they still celebrate a two-week long festival during the yearly floods. It was thought to be fed by the tears of Isis. Even small fluctuations in flood levels could bring famine or catastrophe.

*Liquid Gold*

Ancient Egyptian society saw its fair share of uprisings, revolts and conquests, but a new paper hints that a surprising force may have been meddling in the affairs of the time. The nefarious agent? Volcanoes, say researchers from Yale University in a new paper in _Nature Communications_. Large eruptions can cause small but critical changes in rainfall around the headwaters of the Nile, something they found lined up with periods of revolt and instability in ancient Egypt.

The researchers relied on a combination of ancient records and modern techniques to divine the weather thousands of years ago. Papyrus scrolls from the Ptolemaic era around 300 BC provided insights into periods of social unrest and drought, and they combined those with an analysis of ice cores taken from Greenland and Antarctica. The plumes of sulfur that volcanic eruptions spew into the air leave a distinct trace in the ice, forming a record of when major volcanic eruptions occurred.

A Nilometer in Cairo. _(Credit: Baldiri/Wikimedia Commons)_

The sulfur also serves to cool the planet by reflecting sunlight, and this likely starved the Nile of rainwater during the monsoon season by shifting weather patterns, leading to parched fields come summer. Readings from Nilometers, ancient observatories on the Nile that tracked yearly water levels, confirmed reduced flooding during these times, depriving the Egyptians of their main food source. The Egyptians relied on an elaborate system of dams and canals to inundate their fields, bringing in silt to serve as fertilizer and water to keep crops alive. If the waters failed to crest high enough, the fields remained dry and food production dwindled.

*Volcanic Impact*

This translated to real consequences. By tracking records of priestly decrees, revolts and land sales during these times, the researchers found a marked increase during years when the floods failed to deliver. A famous military campaign cut short lined up with unrest at home, as did a 20-year-long uprising during the Ptolemaic era. Failed floods meant famine and bloodshed, and the death and destruction they bring.

Eruptions may have even played a role in the fall of the Ptolemaic dynasty in 30 BC, they say, when a Roman invasion swept through the country.

Saying that volcanoes toppled the Egyptians is obviously untrue — we can blame Gaius Octavius for that. The vagaries of the climate can have very real effects on people’s lives, however, especially when those people are part of a populous nation perched near the only fresh water source for hundreds of miles around. An ill-timed eruption could conceivably tip the scales.

The lesson remains meaningful today. Some 70 percent of the world’s population today depends in some way on monsoons. Shifting the pattern of rainfall that people have spent tens or hundreds of years living with and adapting to can cause real harm, whether you’re in Bangladesh or Houston. It needn’t be a massive eruption, either. Climate change is altering weather conditions all across the globe at a rate much quicker than many can adapt to.

When we assess how a changing planet could affect us, let’s take a lesson from the Egyptians.


----------



## ae1905

engadget.com

*The way scientific units are calculated is changing*

[HR][/HR] Ever pondered the precision of the international system of units (SI)? (Why should you? You're not going to be called on to measure the temperature in the Large Hadron Collider any time soon.) You may be in need of a refresher, then. The kilogram is defined as a lump of platinum-iridium alloy locked in a vault near Paris. The artefact is known to fluctuate in weight (due to surface contamination), making it tricky to pin down its exact mass.

But the kilogram made the cut for inclusion in the broader redefinition of units with the acceptance of the so-called watt balance method in 2015. This approach essentially balances mechanical power against electromagnetic power, tying a measured mass to Planck's constant through precise measurements of velocity, voltage and current.
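
The principle can be written down compactly. In weighing mode the balance satisfies mg = BLI, and in moving mode U = BLv; eliminating the hard-to-measure geometry factor BL gives m = UI/(gv). A toy sketch with invented values chosen to yield one kilogram:

```python
# Watt balance sketch. Weighing mode: m*g = B*L*I. Moving mode: U = B*L*v.
# Eliminating the geometry factor B*L gives m = U*I / (g*v).
# All values below are invented illustrative numbers, not real balance data.

def mass_from_watt_balance(voltage_v, current_a, g_m_s2, velocity_m_s):
    return voltage_v * current_a / (g_m_s2 * velocity_m_s)

U = 0.981  # V, voltage induced in moving mode (assumed)
I = 0.02   # A, coil current in weighing mode (assumed)
g = 9.81   # m/s^2, local gravitational acceleration
v = 0.002  # m/s, coil velocity in moving mode (assumed)

print(f"mass = {mass_from_watt_balance(U, I, g, v):.3f} kg")
```

The link to Planck's constant comes in because the voltage and current themselves are measured via quantum-electrical effects; this sketch only shows the power-balancing step.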

An ampere (the base unit of electric current, often shortened to "amp") is presently defined by an imaginary experiment involving the force between two infinite wires. In the near future, the unit could be measured using an electron pump. Meanwhile, the mole is the unit for the amount of substance in a system with as many elementary entities as there are atoms in 0.012 kilograms of carbon-12. In just a couple of years, it could be defined using the silicon sphere (the device that gives scientists Avogadro's constant).

Finally, the Kelvin -- the base unit for temperature -- relates to little more than water: The triple point of water to be exact. The redefinition would rely on the Boltzmann constant, which scientists measured using a dielectric-constant gas thermometer. By grounding the SI on an invariable foundation of constants, scientists should be able to pin down their definitions for good. Roll on, 2019.


----------



## ae1905

engadget.com

*NASA study will help identify potentially habitable planets*

[HR][/HR] NASA has already found tons of exoplanets around nearby stars, and will spot countless more once the James Webb Space Telescope (JWST) launches. The problem is that scientists aren't exactly sure which planet-star combinations are most likely to support life. A new NASA study has found that planets orbiting small stars like Trappist-1 could retain their oceans for billions of years, even if they're quite close -- provided the star emits just the right amount of infrared radiation.

For the foreseeable future, astronomers will be scanning red dwarf stars for habitable planets, rather than sun-like stars. That's because planets are easier to find around red dwarfs: the stars are small enough that the wobble induced by small, Earth-like planets is detectable. On top of that, the dip in light when a planet passes in front is noticeable, and scientists can determine the composition of its atmosphere based on how much starlight it absorbs.
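
That noticeable dip scales as the square of the planet-to-star radius ratio, which is why small stars are so favourable. A quick comparison using approximate radii (0.12 solar radii for a Trappist-1-like star is an assumed round value):

```python
# Transit depth scales as (R_planet / R_star)^2, so an Earth-size planet
# blocks far more of a red dwarf's light than of a sun-like star's.
# Radii are approximate round values.

R_SUN_KM = 696_000
R_EARTH_KM = 6_371

def transit_depth_percent(r_planet_km, r_star_km):
    return 100 * (r_planet_km / r_star_km) ** 2

sun_dip = transit_depth_percent(R_EARTH_KM, R_SUN_KM)
dwarf_dip = transit_depth_percent(R_EARTH_KM, 0.12 * R_SUN_KM)
print(f"Earth transiting the Sun: {sun_dip:.4f}% dip")
print(f"Earth transiting a Trappist-1-like star: {dwarf_dip:.2f}% dip")
print(f"ratio: about {dwarf_dip / sun_dip:.0f}x deeper")
```

An Earth-size planet crossing a star an eighth the Sun's size produces a dip dozens of times deeper, well within reach of current instruments.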

Because of that, scientists are obviously very concerned about which red dwarf stars and planets can support life. That's where the new study, done by a team from NASA's Goddard Institute for Space Studies and the Earth-Life Science Institute at the Tokyo Institute of Technology, comes in.

If a planet is too cold, any water will freeze into ice, making life formation challenging. If it's too hot, water will evaporate and rise up into the stratosphere, where it will get broken into hydrogen and oxygen by the star's UV (ultraviolet) light. The latter state, called a "moist greenhouse," eventually leads to the loss of all oceans, killing any chances for life.








Artist's conception of the Trappist-1 system (NASA)

Unlike Earth, planets in red dwarf systems are often tidally locked, with the same side always pointing toward the star. That leads to extreme heating on one side and cooling on the other, but luckily, such planets zip around their stars quickly enough to create a circulating atmosphere. That atmosphere can be enough to keep the planet at the right temperature for liquid water, while blocking it from evaporating into the stratosphere.

Using a new, advanced 3D atmospheric model, the NASA and Tokyo-based researchers simulated the atmospheric circulation on a planet in a hypothetical red dwarf system. "We found an important role for the type of radiation a star emits and the effect it has on the atmospheric circulation of an exoplanet in making the moist greenhouse state," said lead author Yuka Fujii.

Until now, scientists figured that if a planet's surface was too warm, around 150 degrees F, it would create an ocean-destroying moist greenhouse state. 

However, the team found that on red dwarf, Trappist-1-type planets, that wasn't necessarily the case. If a star emitted enough near-infrared radiation, it could kick off a moist greenhouse effect even at temperatures around those of Earth's tropics.

Surprisingly, though, the model also showed that if an exoplanet orbited closer to its parent star, the infrared heating would increase moisture in the atmosphere more gradually. That means that, contrary to findings from previous models, it could remain habitable.

If the study proves valid, it will help narrow down habitable exoplanet candidates. Scientists can first measure the radiation of a star, knowing that cooler stars emit more near-infrared radiation. Then, if possible, they could measure its planet's atmospheric composition using spectroscopic methods. Those methods mostly target a planet's stratosphere, so the presence of water there -- contrary to what you might think -- could be a bad sign for life.

"As long as we know the temperature of the star, we can estimate whether planets close to their stars have the potential to be in the moist greenhouse state," said co-author Anthony Del Genio from NASA. "If there is enough water to be detected, it probably means that planet is in the moist greenhouse state." If so, the planet is likely shedding water quickly -- so the oceans, and any potential life in them, could be doomed.


----------



## ae1905

scientificamerican.com

*Oceans Can Rise in Sudden Bursts*

Chelsea Harvey, E&E News
[HR][/HR] The threat of sea-level rise remains one of the greatest global concerns about climate change, and scientists are still improving their predictions of how much — and how quickly — the world's oceans may rise. To help answer those questions about the future, some researchers are looking into the past.

New research has provided one of the most detailed looks yet into the patterns of sea-level rise that occurred during the world's last major warming period, more than 10,000 years ago. The study, published yesterday in _Nature Communications_, suggests that during this time water rose rapidly, in punctuated bursts, rather than gradually over time. It was likely driven by uneven pulses of meltwater from the world's collapsing glaciers.

The researchers suggest these past events could be viewed as a kind of “analog” for the future — a warning of the events that could yet come under future climate change.

“It's not exactly the same situation,” acknowledged André Droxler, a professor of marine geology at Rice University and one of the study's authors, in an interview with E&E News. Present-day warming is being driven not by natural processes, but by carbon emissions from large-scale burning of fossil fuels, an unprecedented event in the Earth's history.

But the researchers suggest there may be similarities between the collapse of ice sheets thousands of years ago and the destabilization of the world's ice sheets in the future.

“We still have plenty of ice volume to be melted,” Droxler said. “We know that the Greenland ice sheet is melting, the western Antarctic ice sheet is melting. And so I think our study, during this time of well-established global warming, could become a great analog for where we are living and where we will be living the next few centuries.”

In order to look back so many thousands of years into the past, the researchers turned to a surprising source of information: fossilized coral reefs, located just off the coast of Texas.

The area is known for its beautiful living reefs, including the Flower Garden Banks National Marine Sanctuary. But the corals that interested the researchers have been dead for more than 10,000 years, drowned and now submerged nearly 200 feet below the surface of the water.

Fossilized corals can contain all kinds of useful scientific information about the ancient world. Sampling their preserved bodies can yield data on past temperatures and ocean chemistry — and because many coral species can only grow at certain depths, typically close to the surface, the location of their remains can tell scientists what the water levels there used to be like.

With this in mind, Droxler and a group of colleagues, including lead study author Pankaj Khanna, a Ph.D. candidate at Rice University, set off on the Schmidt Ocean Institute's research ship, the Falkor, to investigate the site. They used a special sonar system to create high-resolution 3-D maps of the dead reefs on the seafloor — and in the process, they discovered an intriguing pattern.

The fossil reefs were arranged along the seafloor in a series of six stairlike shelves, or terraces — a classic signal of past sea-level rise. As water levels rise, corals must scramble backward toward the retreating shoreline in order to stay close enough to the surface to survive. In the process, they produce a kind of skeletal, vertical shelf and then grow to fill in the space closer to the shoreline behind it. That forms a coral terrace. There they remain until the next period of rapid sea-level rise, when they produce another shelf and move backward again.

The next step was to figure out how old the terraces are. To do so, researchers matched the terrace depths to a global sea-level curve, a kind of general estimate of sea-level changes throughout geological history.
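
The matching step can be sketched as a simple inversion of a sea-level curve: given a terrace's depth, interpolate the curve to find when sea level sat there. The curve values below are invented for illustration, not the actual global reconstruction:

```python
# Toy version of dating a terrace: linearly interpolate an (age, sea level)
# table to find when sea level sat at the terrace's depth.
# Curve values are invented for illustration only.

# (years before present, metres below today's sea level)
CURVE = [(11_000, 45), (12_000, 55), (13_000, 68), (14_000, 80)]

def age_for_depth(depth_m):
    """Linear interpolation between bracketing points of the curve."""
    for (a0, d0), (a1, d1) in zip(CURVE, CURVE[1:]):
        if d0 <= depth_m <= d1:
            return a0 + (depth_m - d0) / (d1 - d0) * (a1 - a0)
    return None  # depth falls outside the curve's range

print(f"A terrace at 60 m depth dates to about {age_for_depth(60):.0f} years ago")
```

Drilling physical coral samples, as the team hopes to do next, would replace this indirect curve-matching with direct radiometric dates.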

The results suggested the terraces had arisen during a period between 12,000 and 14,000 years ago, before the rate of sea-level rise finally overcame them and they died. This time coincides with a warming period following the end of the last ice age, when massive amounts of ice from the world's glaciers melted and poured into the sea.

Scientists already knew this was a time marked by significant sea-level rise. But the existence of the terraces suggests, for the first time, that the process was not gradual, but rather occurred in sharp, sudden bursts. In fact, the terraces suggest several meters of sea-level rise may have occurred on the scale of just decades during this time.

“The study is important because people had not thought of this before, that these kinds of small-scale events would be so common during a warming world,” said Khanna, the study's lead author.

The researchers suspected these spurts of sea-level rise were driven by pulses of meltwater from destabilizing glaciers. So they compared their estimates to an existing record of the ancient climate in Greenland, which was created using ice core samples from the ice sheet.

Even while the ice age was ending and the planet was heating up, the ancient climate was still experiencing fluctuations in temperature — and the researchers found that all but one terrace corresponded with warm periods, which were likely accompanied by sudden influxes of meltwater into the sea.

“This one exception is probably because our timing is not perfect,” Droxler acknowledged. In the future, the scientists hope they may be able to conduct another expedition to drill physical samples from the dead corals, which will help them come up with more precise dates for each of the terraces.

In the meantime, the conclusions represent “a kind of interesting and new idea,” according to Andrea Dutton, a geology professor and paleoclimate expert at the University of Florida who was not involved with the new research.

“We've known for quite some time that there was at least one very rapid pulse of sea-level rise during the deglaciation between the last ice age and the present — we've known for a very long time that sea level can rise with a sudden jump related to the dynamics of the ice sheet,” she said. “What's new here is you can have a bunch of kind of smaller pulses that are spaced really closely together, and that's kind of the new idea that they're putting forward.”

And it's an idea that could hold dramatic implications for the planet's warming future. Scientists are still investigating the physical processes affecting the melting of the world's current ice sheets, and there's still a great deal of uncertainty about how, and how quickly, they might react to future warming. But the idea that many rapid bursts of sea-level rise have occurred in the past, and could occur again, could be critical information for coastal communities as they plan for the future.

For now, scientists are placing increasing emphasis on the study of both the Greenland and Antarctic ice sheets to better understand the physical processes affecting the melting of glaciers. As Dutton pointed out, a great deal of the ice that contributed to sea-level rise at the end of the last ice age — much of which existed in the midlatitudes — has now completely melted away, and it's unclear whether the world's remaining ice sheets will respond to human-caused climate change in the same ways.

“While we're not sure, it kind of opens up the door to this possibility that as ice sheets retreat quickly, maybe they do so in a stepwise fashion,” she said. “But it's not clear that the mechanisms will be exactly the same between those two ice sheets, and that's something to be determined.”

_Reprinted from Climatewire with permission from E&E News. E&E provides daily coverage of essential energy and environmental news at www.eenews.net._


----------



## ae1905

blogs.scientificamerican.com 

*The U.S. Is Retreating from Religion*

Allen Downey
[HR][/HR] By 2030, say projections, a third of Americans will have no religious preference 









_Credit: Luis Mariano González Getty Images_ 

Since 1990, the fraction of Americans with no religious affiliation has nearly tripled, from about 8 percent to 22 percent. Over the next 20 years, this trend will accelerate: by 2020, there will be more of these "Nones" than Catholics, and by 2035, they will outnumber Protestants.

The following figure shows changes since 1972 and these predictions, based on data from the General Social Survey (GSS):







Allen Downey

The GSS, which surveys 1,000–2,000 adults in the U.S. per year, includes questions related to religious beliefs and attitudes. Regarding religious affiliation, it asks “What is your religious preference: is it Protestant, Catholic, Jewish, some other religion, or no religion?”

In the figure, the dark lines show the fraction of respondents in each group for each year of the survey until 2016. The shaded areas show predictions, based on a statistical model of the relationship between year of birth, age, and religion.

Religious beliefs are primarily determined by the environment people grow up in, including their family life and wider social influences. Although some people change religious affiliation later in life, most do not, so changes in the population are largely due to generational replacement.

We can get a better view of generational changes if we group people by their year of birth, which captures information about the environment they grew up in, including the probability that they were raised in a religious tradition and their likely exposure to people of other religions. The following figure shows the share of people in each religious group, for birth years from 1880 to 1995:







Allen Downey

Among people born before 1940, a large majority are Protestant, only 20–25 percent are Catholic, and very few are Nones or Others. But these numbers have changed rapidly in the last few generations: among people born since 1980, there are more Nones than Catholics, and among the youngest adults, there may already be more Nones than Protestants.

However, this view of the data does not show the effect of age. If religious affiliation increases or decreases, on average, as people get older, this figure could be misleading.

Fortunately, with observations over more than 40 years, the design of the GSS makes it possible to build a statistical model that estimates the effects of birth year and age separately. Then we can use the model to generate predictions, by simulating the results of future surveys. The details of this methodology are in a longer version of this article (see links below).
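The generational-replacement logic behind these predictions can be sketched with toy numbers. The cohort shares below are illustrative, not Downey's GSS estimates: if each birth cohort keeps the affiliation mix it formed while growing up, then the population mix in any year is roughly an average over the cohorts who are adults that year.

```python
# Toy sketch of prediction by generational replacement. The cohort shares
# are made up for illustration; the real model estimates birth-year and
# age effects jointly from GSS data.

# fraction of "Nones" by birth decade (hypothetical, rising for recent cohorts)
none_share = {1940: 0.05, 1950: 0.10, 1960: 0.15,
              1970: 0.20, 1980: 0.28, 1990: 0.35}

def population_none_share(year, adult_span=60):
    """Average cohort shares over everyone aged 18 to 18+adult_span in `year`."""
    cohorts = [d for d in none_share
               if 18 <= year - (d + 5) <= 18 + adult_span]  # d+5 ~ mid-decade birth year
    return sum(none_share[d] for d in cohorts) / len(cohorts)

# Old, low-"None" cohorts age out and young, high-"None" cohorts age in,
# so the population share rises even if no individual ever switches.
print(round(population_none_share(2016), 3))  # 0.188
print(round(population_none_share(2035), 3))  # 0.245
```

Downey's actual model additionally separates age effects from cohort effects, which this sketch ignores.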

*Are these predictions credible?*
Social changes are generally unpredictable. At any point another "Great Awakening" could reverse these trends. But among social changes, demographic predictions are relatively safe; the events they predict have, in some sense, already happened. The people who will turn 40 years old in 2037 are turning 20 this year, and we already have data about them. The people who will turn 20 in 2037 have been or soon will be born. So these predictions will only be wrong if current teenagers are more religious than people in their 20s, or if current children are being raised in a more religious environment. But in both cases, the opposite is more likely to be true.

In fact, there are reasons to think these predictions are conservative:



- Survey results like these are subject to social desirability bias, which is the tendency of respondents to shade their answers in the direction they think is more socially acceptable. To the degree that apostasy is stigmatized, we expect these reports to underestimate the number of Nones. As the visibility of nonreligious people increases, they might be more willing to be counted; in that case, the trends would go faster than predicted.
- The trends for Protestants and Nones have apparent points of inflection near 1990. Predictions that include earlier data are likely to underestimate future trends. If we use only data since 1990 to generate predictions, we expect the fraction of Nones to exceed 40 percent within 20 years.
 
A longer version of this article is available from my blog, “Probably Overthinking It.” It applies the same methods to predict changes in other aspects of religion: belief in God, interpretation of the Bible, and confidence in the people who run religious organizations.

The data I used and all of my code are available in this Jupyter notebook.

The views expressed are those of the author(s) and are not necessarily those of Scientific American. 

Allen Downey
Allen Downey is a Professor of Computer Science at Olin College in Needham MA. He is the author of "Think Python," "Think DSP" and other books that use Python to explore topics in engineering and data science.


----------



## ae1905

blogs.discovermagazine.com

*Beluga Living with Dolphins Swaps Her Calls for Theirs*

In November 2013, a four-year-old captive beluga whale moved to a new home. She had been living in a facility with other belugas. But in her new pool, the Koktebel dolphinarium in Crimea, her only companions were dolphins. The whale adapted quickly: she started imitating the unique whistles of the dolphins, and stopped making a signature beluga call altogether.

“The first appearance of the beluga in the dolphinarium caused a fright in the dolphins,” write Elena Panova and Alexandr Agafonov of the Russian Academy of Sciences in Moscow. The bottlenose dolphins included one adult male, two adult females and a young female. But the animals soon got along, er, swimmingly. In August 2016, one of the adult female dolphins gave birth to a calf that regularly swam alongside the beluga.

The researchers were curious about what the new pool-mates were saying to each other. Dolphins are famously chatty animals. Their sounds include echolocation clicks and “signature whistles,” calls that are unique to each dolphin, kind of like names. Belugas, though, are vocal virtuosos. In addition to their rich repertoire of squeaks, squeals, and other calls, they can imitate other animals and people. One captive beluga developed such a good impression of human speech that it fooled a person diving in its tank. (Here’s another link to the audio.)

Panova and Agafonov have been studying the acoustic communications of animals in the dolphinarium since 2010. Immediately after their beluga arrived, they made sound recordings of the whole group swimming together. Two months later, they led the beluga into a separate pool for a few dozen brief recording sessions. They made more recordings nine months after that, for a total of more than 90 hours of audio.

In the beluga’s first days in the dolphin pool, she gave “calls typical for her species,” Panova and Agafonov write. She made squeaks, vowel-like calls, and particular two-toned sounds that seemed to be her “contact calls.” Similar to dolphins’ signature whistles, these are the sounds belugas make to check in with others in their group. Mother and baby belugas use contact calls to keep track of each other, as do belugas that are friends or relatives.

But at her two-month recording session, the beluga was performing some new numbers. She still made her own whistles and vowel sounds, but she’d added calls that resembled the signature whistles of the three adult dolphins in her group. She also made whistles that all the dolphins shared. And she seemed to have dropped her beluga contact calls altogether.

At her later recording session, the beluga’s repertoire was unchanged. Panova and Agafonov say it’s “disappointing” that they didn’t capture earlier recordings of the beluga on her own, because they might have discovered her imitating the dolphin whistles even sooner than two months. In another study, they write, an adult beluga imitated a sound the first time it was played.

Panova points out that while other studies have found belugas imitating sounds such as human speech, birdsong, and computer-generated noises, this beluga is imitating sounds that could actually help her communicate with the animals around her. The beluga, finding herself alone, may have been especially motivated to join the dolphins’ social group. “This case may be an interesting example of interspecies communication,” Panova says.

Image: Shutterstock/Andrii Zhezhera


----------



## ae1905

cosmosmagazine.com

*Universe shouldn’t exist, CERN physicists conclude*

One of the great mysteries of modern physics is why antimatter did not destroy the universe at the beginning of time.

To explain it, physicists suppose there must be some difference between matter and antimatter – apart from electric charge. Whatever that difference is, it’s not in their magnetism, it seems.

Physicists at CERN in Switzerland have made the most precise measurement ever of the magnetic moment of an anti-proton – a number that measures how a particle reacts to magnetic force – and found it to be exactly the same as that of the proton but with opposite sign. The work is described in _Nature_. 

“All of our observations find a complete symmetry between matter and antimatter, which is why the universe should not actually exist,” says Christian Smorra, a physicist at CERN’s Baryon–Antibaryon Symmetry Experiment (BASE) collaboration. “An asymmetry must exist here somewhere but we simply do not understand where the difference is.”

Antimatter is notoriously unstable – any contact with regular matter and it annihilates in a burst of pure energy that is the most efficient reaction known to physics. That’s why it was chosen as the fuel to power the starship _Enterprise_ in _Star Trek_.

The standard model predicts the Big Bang should have produced equal amounts of matter and antimatter – but that’s a combustive mixture that would have annihilated itself, leaving nothing behind to make galaxies or planets or people.

To explain the mystery, physicists have been playing spot the difference between matter and antimatter – searching for some discrepancy that might explain why matter came to dominate. 

So far they’ve performed extremely precise measurements for all sorts of properties: mass, electric charge and so on, but no difference has yet been found.

Last year, scientists at CERN’s Antihydrogen Laser PHysics Apparatus (ALPHA) experiment probed an atom of anti-hydrogen with light for the first time, again finding no difference when compared with an atom of hydrogen.

But one property was known only to middling accuracy compared to the others – the magnetic moment of the antiproton. 

Ten years ago, Stefan Ulmer and his team at BASE collaboration set themselves the task of trying to measure it.

First they had to develop a way to directly measure the magnetic moment of the regular proton. They did this by trapping individual protons in a magnetic field and driving quantum jumps in their spin using another magnetic field. This measurement was itself a groundbreaking achievement, reported in _Nature_ in 2014.

Next, they had to perform the same measurement on antiprotons – a task made doubly difficult by the fact that antiprotons will immediately annihilate on contact with any matter.

To do it, the team used the coldest and longest-lived antimatter ever created. 

After creating the antiprotons in 2015, the team were able to store them for more than a year inside a special chamber about the size and shape of a can of Pringles.

Since no physical container can hold antimatter, physicists use magnetic and electric fields to contain the material in devices called Penning traps.

Usually the antimatter lifetime is limited by imperfections in the traps – little instabilities allow the antimatter to leak through.

But by using a combination of two traps, the BASE team made the most perfect antimatter chamber ever – holding the antiprotons for 405 days. 

This stable storage allowed them to run their magnetic moment measurement on the antiprotons. The result gave a value for the antiproton magnetic moment of −2.7928473441 μ_N, where μ_N is a constant called the nuclear magneton. Apart from the minus sign, this is identical to the previous measurement for the proton.

The new measurement is precise to nine significant digits, the equivalent of measuring the circumference of the Earth to within a few centimeters, and 350 times more precise than any previous measurement. 
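The Earth-circumference analogy checks out arithmetically. A minimal sketch, assuming nine significant digits corresponds to a relative uncertainty of about one part per billion; the 40,075 km equatorial circumference is a standard figure, not from the article:

```python
# Back-of-envelope check of the precision analogy.
earth_circumference_m = 40_075_000   # ~40,075 km around the equator
relative_uncertainty = 1e-9          # nine significant digits ~ 1 part per billion
error_m = earth_circumference_m * relative_uncertainty
print(f"{error_m * 100:.1f} cm")     # 4.0 cm, i.e. "a few centimeters"
```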

“This result is the culmination of many years of continuous research and development, and the successful completion of one of the most difficult measurements ever performed in a Penning trap instrument,” says Ulmer. 

The universe’s greatest game of spot the difference goes on. The next hotly anticipated experiment is over at ALPHA, where CERN scientists are studying the effect of gravity on antimatter – trying to answer the question of whether antimatter might fall ‘up’.


----------



## ae1905

scientificamerican.com

*NASA "Twins Study" Shows How Spaceflight Changes Gene Expression*

Mike Wall, SPACE.com

The changes spaceflight induces in astronauts are much more than skin deep.

Space travel strongly affects the way genes are expressed, or turned on and off, preliminary results from NASA's "Twins Study" have revealed.

"Some of the most exciting things that we've seen from looking at gene expression in space is that we really see an explosion, like fireworks taking off, as soon as the human body gets into space," Twins Study principal investigator Chris Mason said in a statement. [The Human Body in Space: 6 Weird Facts]

"With this study, we've seen thousands and thousands of genes change how they are turned on and turned off," added Mason, who's based at Weill Cornell Medicine, Cornell University's medical school. "This happens as soon as an astronaut gets into space, and some of the activity persists temporarily upon return to Earth."

Specifically, Mason and his team found an increase in methylation, which involves slapping methyl groups onto stretches of DNA. This process commonly inhibits activation of the genes involved. (A methyl group consists of a carbon atom bonded to three hydrogen atoms.)

The Twins Study centers on former NASA astronauts Scott and Mark Kelly, who are identical twins and therefore share a DNA profile.

Scott Kelly and cosmonaut Mikhail Kornienko lived aboard the International Space Station (ISS) from March 2015 through March 2016, completing an unprecedented 11-month mission. (Most stints aboard the orbiting lab last five to six months.) Mark Kelly stayed on Earth the entire time, serving as a control against which to measure the changes that spaceflight may have induced in Scott.

Researchers are still assessing such changes, across the 10 separate investigations that constitute the broader Twins Study. Final results are expected to be published next year, NASA officials said.

"This study represents one of the most comprehensive views of human biology," Mason said. "It really sets the bedrock for understanding molecular risks for space travel as well as ways to potentially protect and fix those genetic changes."

Spaceflight also causes changes to astronauts' bodies on the macro level, including muscle atrophy, decreased bone density and visual deterioration. 

Scientists have long known about such effects, and astronauts already take measures to mitigate some of them. For example, vigorous exercise is a part of every crewmember's daily routine aboard the ISS, as a way to combat bone and muscle wasting.

NASA is keen to better understand all of the physiological and psychological impacts of spaceflight, so it can better prepare for crewed missions to Mars and other distant destinations, agency officials have said.

Copyright 2017 SPACE.com, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


----------



## ae1905

theconversation.com

*We discovered this dinosaur had stripes – and that tells us a lot about how it lived*

Fiann Smithwick
Working out what colour dinosaurs were was once thought impossible. But recent discoveries about how colour-producing pigments are preserved in fossils have allowed palaeontologists to reconstruct some dinosaurs’ colour patterns. And by better understanding what dinosaurs looked like, we can learn more about their behaviour and the environments they lived in.

My colleagues and I have been studying the colour patterns of a small, feathered, meat-eating dinosaur known as _Sinosauropteryx_ from the Early Cretaceous period in what is now China. By mapping out the dark pigmented plumage across the body, we found evidence of colour patterns associated with camouflage in living animals today. This included countershading (a dark back and light underside), a striped tail and a “bandit mask” stripe running across its eyes.

It’s a good reminder that we need to rethink the popular image of dinosaurs as solid green or brown giant scaly lizards. What’s more, this evidence could encourage us to change our view of the environment _Sinosauropteryx_ was living in almost 130m years ago.







Plotting the colours. Credit: Cell/University of Bristol

The only elements of a feather preserved in most fossils are the structures that originally contained pigment, known as melanosomes, while the keratin that forms the structure of the feather decays. By identifying the types of melanosomes, you can work out the possible original colour of the feathers. Previous work on the melanosomes of _Sinosauropteryx_ suggested the dark areas of the fossil were a rusty brown or ginger colour when the animal was alive. In other cases, scientists have shown that some avian (bird ancestor) dinosaurs had mottled and even iridescent plumage.

White feathers don’t have pigment and so aren’t preserved in fossils. That means any apparent gaps in the fossilised plumage were most likely originally covered in white feathers. Using this principle, we mapped out the dark and light areas of _Sinosauropteryx_ to create an overall picture of the dinosaur’s colour pattern.

The bandit mask and stripy tail can tell us about the life of _Sinosauropteryx_ by comparing them with the colours of modern animals. As descendants of theropod dinosaurs, birds are the best example for this. They often have facial stripes to hide their eyes, which are key visual cues used by both predators and prey to detect would-be attackers or a potential meal. Eye stripes also reduce glare and so allow animals to see better in bright light.

*Disruptive camouflage*

Stripy tails are less well understood in living animals but they can also serve as a form of “disruptive” camouflage, breaking up the outline of a body part and making it less obvious. It might also be a form of distraction, making the tail more obvious and drawing the attention of predators from the more important body and head.

We know that early tyrannosauroids (forebears to the mighty _Tyrannosaurus rex_) lived at the same time as _Sinosauropteryx_ and may well have hunted the diminutive dinosaur. Direct evidence, in the form of a complete animal in the stomach of one fossil, also shows that _Sinosauropteryx_ hunted small lizards. Vision was crucial to these hunting and hunted dinosaurs, so it is not surprising that we see camouflage patterns evolving at the time.

We can also make some important judgements based on _Sinosauropteryx_’s countershading. This is one of the most common colour patterns seen in living animals and helps them to hide by both blending into the background and by making the body look less 3D. The animal’s lighter underside counter-balances the shadows that its body casts across it.

Importantly, different environments have different light conditions – so how the pigment gradient should appear varies with habitat. Animals living in open habitats (such as savannahs) often display a sharp contrast between their dark and light patches high on the body, while those living in closed habitats (think forests) generally have a lower and more gradual colour gradient.







Stripy bandit. Credit: Cell/University of Bristol

We made 3D models of the body of _Sinosauropteryx_ and put them in different virtual habitats to see how the shadows on the body would have looked according to these principles. The actual colour pattern of the fossils, which went from dark to light high up on the body, would have been best suited to counterbalance the shadows from an open habitat with lots of light.

This contrasts to previous work on the colour of another dinosaur from the same location, _Psittacosaurus_, which suggested it had evolved to suit a closed or forested habitat. This suggests the environment at the time was more varied than previously assumed. 

By reconstructing the colour patterns of _Sinosauropteryx_, we have gained unique insights into how and where it may have lived. This has helped to build one of the most informed and accurate pictures of a dinosaur ever reconstructed.


----------



## ae1905

*The world's largest telescope will unlock the universe's oldest secrets*

Bigger really is better, when it comes to astronomy.


https://www.engadget.com/2017/11/03/GMT-worlds-largest-telescope/


----------



## The red spirit

ae1905 said:


> theconversation.com
> 
> *We discovered this dinosaur had stripes – and that tells us a lot about how it lived*
> 
> Fiann Smithwick
> Working out what colour dinosaurs were was once thought impossible. But recent discoveries about how colour-producing pigments are preserved in fossils have allowed palaeontologists to reconstruct some dinosaurs’ colour patterns. And by better understanding what dinosaurs looked like, we can learn more about their behaviour and the environments they lived in.
> 
> My colleagues and I have been studying the colour patterns of a small, feathered, meat-eating dinosaur known as _Sinosauropteryx_ from the Early Cretaceous period in what is now China. By mapping out the dark pigmented plumage across the body, we found evidence of colour patterns associated with camouflage in living animals today. This included countershading (a dark back and light underside), a striped tail and a “bandit mask” stripe running across its eyes.


So, Adidas is very old then.


----------



## ae1905

blogs.discovermagazine.com

*Discovered: A (Theoretical) Fusion Technique 8 Times Stronger Than One In H-Bomb*

CERN, which houses the Large Hadron Collider. _(Credit: Dominionart/Shutterstock)_

When hydrogen atoms fuse together, they release a vast amount of energy. That’s the principle that makes hydrogen bombs so frighteningly powerful, and it’s part of what powers our sun as well. Now, researchers from the Large Hadron Collider (LHC) say they’ve uncovered a kind of theoretical particle fusion that’s almost eight times more energetic than the fusion of two hydrogen atoms.

The discovery, reported in _Nature_ this week, came during the course of an experiment aimed at making a doubly charmed baryon. That’s some heady physics-speak, but baryons are just a class of sub-atomic particle — both protons and neutrons are baryons — and the “charmed” moniker simply refers to the kind of quarks — the tiny particles that comprise larger ones like protons and neutrons — it’s made out of. So, a doubly-charmed baryon is a particle made from two charm quarks and one up quark. Got it? 

*Zoom, Crash*

Researchers are continuously running experiments with the LHC to see what kinds of particles they can create by smashing atoms into one another. When the atoms go fast enough, they’re broken apart by the collision, and sometimes the energy involved is enough to force particles together into new configurations. These new particles let the researchers test assumptions about their grand theory of physics, called the “Standard Model,” which describes how every particle in the universe interacts with each other.

When observing their new, doubly-charmed baryon, researchers from the University of Chicago and Tel Aviv University found that it took a lot of energy to force the two charm quarks together, about 130 megaelectronvolts (MeV). There’s a payoff for that effort though, because the process of fusion ends up producing even more energy, for a net release of 12 MeV for the two charm quarks. That’s only about two-thirds of what we get from normal hydrogen fusion, but when the researchers extrapolated that reaction to another kind of quark, the much heavier bottom quark, those numbers went way up.

Theoretically, fusing two bottom quarks takes about 230 MeV, but the payoff is far larger: around 138 MeV released. That’s almost eight times as much as hydrogen fusion, making the explosive result that much bigger.
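The "two-thirds" and "almost eight times" comparisons can be verified directly. A quick check, taking 17.6 MeV for hydrogen (deuterium–tritium) fusion as the standard reference value, which the article itself never states:

```python
# Ratio check for the fusion-energy comparisons. The 17.6 MeV D-T figure
# is a standard textbook value, not from the article.
dt_fusion_mev = 17.6        # energy released by one deuterium-tritium fusion
charm_fusion_mev = 12.0     # net release from fusing two charm quarks
bottom_fusion_mev = 138.0   # predicted release from fusing two bottom quarks

print(round(charm_fusion_mev / dt_fusion_mev, 2))   # 0.68, about two-thirds
print(round(bottom_fusion_mev / dt_fusion_mev, 1))  # 7.8, "almost eight times"
```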

*That’s a Lot!*

The largest hydrogen fusion bomb ever tested was the Russian Tsar Bomba, which gave off about 50 megatons (or 50 million tons) of TNT worth of energy. The Nagasaki-leveling “Fat Man” nuclear bomb only produced around 20 kilotons of energy, or 2,500 times less. Multiplying those numbers by eight is an insanely scary exercise.

Here’s where we tell you not to worry though. First of all, this kind of bottom quark fusion is totally theoretical, it’s never been seen before. And, most importantly, we couldn’t make a bomb out of bottom quarks. That’s because they only exist for roughly one picosecond, or one-trillionth of a second. That’s barely enough time to record their existence, much less do anything with them. Hydrogen bombs are based on a principle of chain reactions, where one pair of fusing hydrogen atoms sets off the next, and so on. Bottom quarks could never do this because they don’t exist for long enough to set each other off.

“If I thought for a microsecond that this had any military applications, I would not have published it,” says co-author Marek Karliner of Tel Aviv University in Israel, speaking to _Live Science_.

The fusion of a single pair of bottom quarks might be possible, the researchers say, but that’s it. After that, they disappear, decaying into far lighter quarks that are nowhere near as dangerous.

So, planet-ending bottom quark bombs are nothing to worry about. The threat of thermonuclear war on the other hand…


----------



## ae1905

blogs.discovermagazine.com

*Why This Fungus Has Over 20,000 Sexes*

Every _Schizophyllum commune_ you see is likely a new gender. _(Credit: wasanajai/Shutterstock)_

Gender isn’t really a fungal construct.

Where we have two traditionally recognized genders, male and female, some species of fungi can have thousands. It sounds confusing, but it’s actually helpful — with so many variations, the fungi can mate with nearly every individual of their species they meet. It must make for a wild singles night.

*Sexy Fun Guys*

One species of fungi, _Schizophyllum commune_, really shines when it comes to gender diversity. The white, fan-shaped mushroom has more than 23,000 different sexual identities, a result of widespread differentiation in the genetic locations that govern its sexual behavior. For humans, and all animals, really, this would never fly, because we’ve evolved a very specific method of reproduction that involves specialized sexual organs to do the mating with and sex cells to carry the genetic information.

Fungi, by contrast, keep it casual. To mate, all a fungus has to do is bump up against another member of its species and let their cells fuse together. _S. commune_ uses a special kind of structure called a clamp connection to do this, and it allows them to exchange their cell’s nuclei, along with the genetic information inside. This keeps reproduction simple and means that a potentially huge number of sexes is possible — other fungi species have dozens or more, though _S. commune_ is certainly an outlier.

*It’s Not What You Think*

The “sexes” don’t really involve physical differences either, as we might think of when the word “sex” comes to mind. The variations are all in the genome, at two separate loci, or locations, each of which has two alleles, or alternate forms. The loci are called A and B and the alleles are termed “alpha” and “beta.” That makes four possible sexes, but there’s another twist. Every A-alpha/beta and B-alpha/beta can have many different variants, called specificities. It amounts to more than 339 specificities for A and 64 for B. Putting those two together yields thousands of possible unique sexes.

The fungus can mate with any specificity as long as it’s different somewhere on both A and B. So, two prospective mates could both have the same A-beta and B-alpha, but have different A-alphas and B-betas and they’d be fine to hook up. If they shared A-alpha and A-beta, though, their pheromones wouldn’t be compatible, meaning that they couldn’t carry out the reproductive process. That leaves a ton of options for mating, though, and essentially means that anyone a fungus meets is fair game for sexy time.
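The compatibility rule described above (differ somewhere at A and somewhere at B) can be sketched in a few lines. For simplicity, each locus is collapsed to a single combined specificity, a hypothetical encoding, since the real loci each carry alpha and beta variants; the specificity counts are the article's:

```python
# Minimal sketch of S. commune mating compatibility, assuming each locus
# is collapsed to one combined specificity (a simplification of the
# alpha/beta structure described above).
A_SPECIFICITIES = 339   # combined A-locus specificities per the article
B_SPECIFICITIES = 64    # combined B-locus specificities per the article

def compatible(x, y):
    """Two individuals, each an (A_specificity, B_specificity) pair,
    can mate only if they differ at BOTH loci."""
    (a1, b1), (a2, b2) = x, y
    return a1 != a2 and b1 != b2

print(A_SPECIFICITIES * B_SPECIFICITIES)  # 21696 distinct mating types
print(compatible((1, 1), (2, 2)))         # True: differ at both loci
print(compatible((1, 1), (1, 2)))         # False: same A specificity
```

With these counts the product is 21,696, a bit below the 23,000 quoted earlier (published tallies of the exact specificity numbers vary), but either way nearly every stranger a fungus meets is a compatible mate.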

It also really helps spread genetic diversity around, because there are so many options. Think about that next time you’re looking for a date.

_(h/t Popular Science)_


----------



## ae1905

scientificamerican.com
*A Fantastic Journey through Cosmic Scales*
Caleb Scharf, Ron Miller

Do you want to hear the most epic story ever?

A long time ago the atoms in your body were spread across trillions of kilometers of otherwise empty space. Billions of years in the past there was no hint that they would eventually come to be configured as your eyes, your skin, your hair, your bones or the 86 billion neurons of your brain. Many of these atoms came from deep inside a star—perhaps several stars, themselves separated by many more trillions of kilometers. As these stars exploded, they hurled parts of themselves outward in a flood of scorching gas that filled a small part of one galaxy out of hundreds of billions of other galaxies, arrayed throughout a gaping span of space and time almost a trillion trillion kilometers across.

Our observable universe is a sphere some 93 billion light-years wide, or close to 10^27 meters, with Earth at its center. Credit: Ron Miller

Some of these atoms have been in the shell of a trilobite, perhaps thousands of trilobites. Since then, they've been in tentacles, roots, feet, wings, blood, and trillions, quadrillions of bacteria in between. Some have floated in the eyes of creatures that once looked out across the landscapes of 100 million years ago. Yet others have nestled in the yolks of dinosaur eggs or hung in the exhaled breath of a panting creature in the depths of an ice age. For others, this is their first time settling into a living organism, having drifted through eons in oceans and clouds, part of a trillion raindrops or a billion snowflakes. Now, at this instant, they are all here, making you.

Each atom is itself a composite that's one tenth of a billionth of a meter across—sitting on the precipitous edge of a universe between our perceived reality and the quantum world. Electrons hazily occupy much of the atom's empty space. Protons and neutrons cluster in a nucleus, 100,000 times smaller than its atom, and are themselves composed of other stupendously small things: quarks and gluons. An electron may have no meaningful property of size but could be thought of as 10 million times smaller than the nucleus.

Add up all the recognizable matter and there may be 10^80 particles such as protons, neutrons, electrons and other subatomic items in the known universe. That's a very big number, but it's also just peanuts, because there are probably a billion times more photons zinging around the cosmos. Yet this stuff is barely 5 percent of what we think is the total matter and energy content of space. Astronomical evidence suggests that there is a shadow realm of still mysterious subatomic particles and fundamental forces that constitute most of the cosmos—an underworld of dark matter and dark energy, dominating the entire universe but unseen by us.

Moreover, at some point 13.8 billion years ago all of this, seen and unseen alike, was squeezed into a far smaller, hugely energetic origin of space and time that we are still inside, along with any being that may exist a billion light-years from here. We're not truly disconnected, even now.

Quite a tall tale. Except this is not fiction; it is our current understanding of the universe and its history [see illustration above].

To examine and display what we truly know (and what we don't) about the entirety of nature, we turn to a tried-and-true approach—the simple premise of a 10-fold zooming view to tour the universe, from the edge of the observable cosmos to the innermost knots of reality. From fingers and toes to modern mathematics and measurements, we can all grasp the notion of powers of 10—sizes that shift by 10 times or by a tenth. Chain these sliding scales together across the three dimensions of space and that tricky thing we call time, and we have a language for expressing the continuities and relations of nature that extends far outside our common experience. The powers of 10 let us zoom from almost everything to nearly nothing.

Of course, a zoomable overview cannot recount every exact detail of the contents and history of the universe. Instead it takes us to specific waypoints throughout the physical scales of the cosmos across 62 orders of magnitude, from the quantum building blocks of the subatomic world to realms in which entire planets are mere droplets of frozen minerals and onward into the greatest expanse yet discovered, in which entire galaxies swarm like glinting motes of dust against the cosmic horizon.
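The 62-decade span is easy to check with a back-of-the-envelope calculation. Here's a small sketch; the endpoint sizes are rough, commonly quoted values assumed for illustration, not figures taken from the article:

```python
import math

# Rough characteristic sizes in meters (assumed, commonly quoted values).
scales = {
    "Planck length": 1.6e-35,
    "proton": 8.4e-16,
    "DNA width": 2e-9,               # the nanometer scale of the DNA illustration
    "human": 1.7,
    "Earth diameter": 1.3e7,
    "observable universe": 8.8e26,   # ~93 billion light-years across
}

# Orders of magnitude (powers of 10) between the smallest and largest scales:
decades = math.log10(scales["observable universe"] / scales["Planck length"])
print(round(decades))  # 62
```

Chaining roughly 62 factor-of-ten jumps takes you from the quantum floor of reality to the cosmic horizon, exactly the tour the article describes.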

This journey through all known scales of reality is, in essence, what “everything” really is. You might be tempted to ask what comes next. What is beyond the phenomenon we call the universe, beyond the everything? What might be “outside” the sunlit, mote-filled room of our observable reality? These are great questions, and in a very real sense anything “outside” our universe must be for now simply “not universe.” The threshold bridging these domains is a place hovering at our cosmic horizon, its scale set by the distance light travels during the age of the universe. Within the boundary is the observable universe. Just outside is a still-mysterious labyrinth.

We live our lives in a narrow slice of existence sandwiched between these extremes of the very small and the very large, looking, listening, smelling and feeling from inside the membranes of our mostly water multicellular bodies. Somehow, we construct meaning out of those senses, experience that slippery property known as consciousness and perhaps even possess that elusive quality we call intelligence. It may be that other complex life across the cosmos is built the same way, to act and feel and think just as we do. Or perhaps our biology is not the only way to construct living things; perhaps the notion that consciousness and intelligence arise from the electrochemical gunk of our brain does not apply to minds elsewhere in the universe.

Confronting these mysteries of self, and the many scales of the cosmos, all we can really do is cross our fingers and hope that our singular experience will not mislead us as we disentangle the big questions of existence.

Stuff of Life: At nanometer scales (10^-9 meter), a molecule of DNA coils within the nucleus of a cell. Credit: Ron Miller

Altogether our situation is a bit farcical. We're in a horrendously unsuitable place for gaining objective truths about the nature of reality. Adrift on one small rocky planet that orbits one ordinary star out of a trillion trillion stars in the observable universe, each one of us is locked inside a singular, self-aware speck of flesh, embedded in a web of biological evolution that sprawls across the eons. Even our bodies are not wholly our own, because they also serve as Darwinian battlefields for trillions of bacteria and viruses. And all of it, all of life as we know it, seems to emerge solely from interactions among mind-numbingly large numbers of duplicated molecular structures—the exquisite architectural interplay of DNA and RNA, which itself arises from the physics of protons, neutrons, electrons and electromagnetic forces. Such tiny components simply follow the fundamental “rules” of the universe that were frozen into place some 13.8 billion years ago. Yet, in concert, they build galaxies, planets, humans, birds and who knows what else across the cosmos [see illustrations above].

How does all this happen? How did this epic story really begin, and how—if ever—will it finally end? Such questions sit at the heart of our efforts to construct a rational picture of nature from our inconvenient vantage point. Any answer is a work in progress but must already exist in hazy outline among all the myriad intersections of the universe's dizzying scales. We invite you to explore them and delight in their beauty. After all, this is your universe as much as anyone else's.

This article was originally published with the title "The Zoomable Universe"
MORE TO EXPLORE

Cosmic View: The Universe in Forty Jumps. Kees Boeke. John Day Company, 1957.

Powers of Ten: A Book about the Relative Size of Things in the Universe and the Effect of Adding Another Zero. Philip Morrison and Phylis Morrison. Scientific American Library, 1982.

Cosmic Eye. Video. Danail Obreschkow, 2012. 



FROM OUR ARCHIVES

Our Place in the Cosmos. Noam I. Libeskind and R. Brent Tully; July 2016.

Caleb Scharf

Caleb Scharf is director of the Columbia Astrobiology Center and author of Gravity's Engines (2012) and The Copernicus Complex (2014). He writes the Life, Unbounded blog for Scientific American and has written for many other publications. He lives in New York City with his wife and two daughters.

Credit: Nick Higgins

Ron Miller

Ron Miller is an award-winning illustrator and author whose work has appeared in Scientific American, National Geographic and Smithsonian, among many other publications, as well as in the definitive editions of Jules Verne's 20,000 Leagues Under the Sea and Journey to the Center of the Earth. He lives in Virginia.

Credit: Nick Higgins


----------



## ae1905

blogs.discovermagazine.com

*Human Brain 'Organoids' Implanted Into Rats*
A suspension of stem cells in liquid nitrogen. _(Credit: Elena Pavlovich/Shutterstock)_

Tiny brain “organoids,” or clusters of neurons grown from human stem cells, have been implanted into rats.

The news comes from _Stat_, and it seems that two different teams have managed to integrate human brain cells into rat brains. The organoids began extending new cells, and even showed signs of activity when the researchers shone lights at the rats’ eyes, a sign that they were functionally connected to the rats’ own neurons.

*Organ-ish*

It’s another step forward in the new, but rapidly progressing field of organoids, blobs of tissue grown outside the body that in some ways resemble our own organs. Researchers are beginning to use organoids to conduct tests of the human body that they couldn’t do on organs still locked inside us. For brain organoids this has included studies of Alzheimer’s, microcephaly, substance abuse and brain development. Other types of organoids have been used to test cancer treatments and new types of drugs, study genetic disorders, and much more.

They’re created by taking human cells, normally skin cells, and converting them to induced pluripotent stem cells, which can form several different kinds of tissues. The stem cells are bathed in a growth medium and allowed to mature, split and grow to form whole tissues that behave like organs. Scientists have observed brain cells begin to form 3-D structures, and some have even coaxed them into forming a cortex-like structure.

The organoids are usually grown in a dish, but implanting them into a living animal gives researchers a better idea of how these cells behave and if their creations work properly. It’s a step toward using them for actual tests of drugs and diseases, but right now researchers are trying to see how the organoids function when they’re implanted into a living creature.
*Bright Future*

The results, to be announced at the annual meeting of the Society for Neuroscience in Washington, D.C., have been promising so far,_ Stat_ reports. Organoids from human stem cells have not only formed connections to the rat brains, but other researchers have managed to connect them to retinal cells and get a response. Harvard University geneticist George Church says that he’s managed to vascularize, or grow blood vessels, in brain organoids. All the evidence seems to suggest that the organoids really do work, if not exactly like a human brain, at least well enough to run experiments on.

Researchers so far have been careful to note that the brain organoids are far from being actual brains. They don’t experience anything like consciousness, and implanting them into rats and mice isn’t going to create Stuart Little. In this case, the rats were adults whose brains had stopped developing, limiting how far the organoids could integrate themselves. The organoids themselves were made of mature cells as well — trying something similar with brain stem cells might lead to more connections and a tighter interweaving with the rats’ brains.

The implications of giving animals human brain tissue — and thus potentially a measure of human consciousness — do raise ethical concerns. We’re far from that point at the moment, researchers say, but the technology could advance to the point where we can augment rat brains to become more human-like. If that happens, how should we treat them? Some ethicists argue that, in recognition of their greater humanity, we could no longer perform experiments on them as we have been.

It’s also a chance to peer further into the murky depths of consciousness itself. A rat-human hybrid would certainly blur the line between human and animal consciousness, bringing us that much closer to figuring out what it is that makes us human.


----------



## ae1905

blogs.discovermagazine.com

*Massive Deposits of Water Ice Found on Mars*







In this color-enhanced image of an eroding cliff (or scarp) on Mars taken from above, water ice is shown in blue. The top third of the image is the planet’s surface leading up to the cliff’s edge, while the bottom third is the valley below. _(Credit: NASA/JPL/University of Arizona/USGS)_

Despite the fact that Mars has an atmosphere just 1 percent as dense as Earth’s, the surface of the Red Planet still has to deal with plenty of weathering and erosion. In 2008, researchers even captured a full-scale avalanche on Mars as it plunged down a 2,300-foot slope into a valley. These types of geological events often expose the structures beneath the martian surface, revealing layers of rock, dry ice and even water ice.

In a study published Thursday in the journal _Science_, researchers using the Mars Reconnaissance Orbiter (MRO) investigated eight steep and eroded slopes (known as scarps) at various locations across Mars. At each of these locations, they found thick shelves of relatively pure water ice located as little as 3.3 feet below the planet’s surface. Furthermore, some of these massive ice deposits were found to be more than 330 feet thick.

While scientists have observed water ice on the surface of the Red Planet many times before, researchers rarely get a chance to learn this much about its layering, thickness, purity and prevalence.

According to the research paper, “The ice exposed by the scarps likely originated as snow that transformed into massive ice sheets, now preserved beneath less than 1 to 2 [meters] of dry and ice-cemented dust or regolith near ±55° latitude.” In 2008, the Phoenix Mars lander discovered similar ice deposits along martian scarps, but they were found in regions much closer to the planet’s northern pole.

Since the ice deposits discovered in today’s study were found intact along the scarps’ steep, eroded slopes, the researchers believe the ice is “cohesive and strong.” Furthermore, the team found that the ice appears banded, showing layered variations in its blue color. This suggests that the massive ice deposits are composed of many distinct layers that have been squished together over time, preserving a record of Mars’ climate history. However, because there are few craters near these sites, the authors suggest the ice was formed relatively recently—in the past million years or so.

Although the massive ice deposits formed quickly (geologically speaking), the researchers say they also recede a tiny bit each summer. In one scarp, the team found that over the course of only three martian years, multiple meter-wide boulders dislodged themselves from the ice deposits, tumbling down into the valley below. Based on this, the researchers estimated the ice is retreating (horizontally) at a rate of a few millimeters each year. This is probably due to the exposed, solid ice sublimating into gas as it contacts the thin martian air.

The discovery of these large reservoirs of pure water ice adds yet another piece of evidence supporting the increasingly held theory that water ice not only exists on Mars, but also is surprisingly common. Although the ice could obviously be used as a source of water for future manned missions to Mars, scientists have a long way to go before then. However, with the Mars 2020 rover just a few years away, the discovery of eight more tantalizing sites ripe for investigation is still an exciting find.

_This post originally appeared in Astronomy.com._


----------



## ae1905

blogs.scientificamerican.com

*The Science Community's "S**thole countries" Problem*

Nina Dudnik
It's easy (and right) to criticize Trump for his vulgar dismissal of developing countries, but scientists harbor their own prejudice









Scientist studying fruit flies in Nairobi, Kenya _Credit: Thomas Imo/Getty Images_ 

The scandal-du-jour is that Donald Trump is reported to have referred to El Salvador, Haiti and essentially the entire African continent as "s**thole" countries. Despite the eruption of condemnation from many corners, the truth may be uglier than we would like to admit: many people subconsciously hold a blanket negative view of countries like these. We in the science community are not exempt from this.

The news today hosts a small-but-steady stream of justifications for why certain countries are plagued by instability, corrupt leadership, and poor infrastructure and basic services. This ignores the centuries of policies from multilateral organizations and wealthy countries that systematically plundered these countries of natural and human resources, propped up bad leaders and shackled their economies.

Similarly, when we criticize developing-country governments for not leading local investment in research and science we ignore the many years in which these same governments were advised to invest in primary education, basic infrastructure, anything but higher education and research. We allow international funders to justify taking a back seat and continue funding science disproportionately in wealthy countries.

Additional justifications are made that the urgency of finding cures for pandemics and other pressing problems requires funding scientists in scientifically-advanced countries to reach translatable results as quickly as possible. There is certainly a cost-benefit argument to be made for this, looking only at the scientific landscape of today, but this approach perpetuates the cycle of under-investment. Countries that are scientifically lagging today have no hope of advancing in the future without investment now.

In response to this, I frequently hear it argued that such scientists should therefore build their base by focusing on training students and practicing introductory level research, leaving the cutting-edge work to scientists in more advanced economies. In the most recent instance, I was told by an American scientist that a colleague in southern Africa should work on “setting up basic molecular biology techniques” and leave such things as CRISPR development to scientists like her. This argument is fundamentally patronizing and relegates scientists in certain countries to being hobbyists and not “real scientists”.

What’s more, this biased image of scientists from poorer countries locks them out of the cycle of science as we know it: funding builds strong infrastructure and secures high-level equipment and trainees, which produce robust trusted data, which lead to high profile publications, which beget more funding and collaborations. Scientists in the developing world cannot get a foot in the door without receiving research-based funds, which are far less likely to be available to set up a basic PCR lab for its own sake. It is in fact a very savvy move to strengthen molecular biology infrastructure by tying it to a high-tech, “sexy” area such as CRISPR.

Today my newsfeed is full of examples of noteworthy achievements, laudable moments of heroism and other signs that people from the countries singled out are in fact noble and worthy. This haste to "prove" the value of certain kinds of people through extraordinary accomplishments reveals a very deep level of prejudice.

Likewise in science we highlight the individuals who have managed to establish global reputations and produce unassailably respectable results in cutting-edge fields. These individuals rightly deserve funding, collaborations and accolades, but by celebrating them we do not absolve ourselves of bias against their countries and compatriots. Rather, we reinforce the “there can be only one” narrative and the idea that only superhuman efforts are evidence of worth, which in turn reinforce the implication that their environments as a whole are short on potential.

Reversing this broad-based level of implicit bias is not going to be simple. It has to be rooted out on many fronts. At the most significant level, the investment in leveling the playing field for scientific infrastructure must be everyone’s responsibility—governments of all countries as well as independent foundations and multilaterals. We must take into account the longstanding under-investment in certain regions and provide adequate funding to leapfrog researchers to a competitive level of infrastructure as well as fund them to carry out genuinely cutting-edge research.

Small-focus issues need fixing too. Something as simple as a researcher’s institutional affiliation can be reacted to reflexively as grounds for dismissal of a grant application or a manuscript. So many researchers are never discovered by potential collaborators, funders or journalists because their work is published in regional or national journals which are excluded from our publication indexing systems. Again, even the definition of high profile journals needs to be examined for deep-seated bias.

Very few of us would ever go on record saying that a country is a “s**thole” scientifically or otherwise. But this is not a debate about transgressing the lines of polite acceptable language. This is about ensuring that beyond our words, our core beliefs and our subsequent actions serve to create a scientific community that is at last, globally equitable. 

The views expressed are those of the author(s) and are not necessarily those of Scientific American. 









Nina Dudnik
Nina Dudnik is the founder and CEO of Seeding Labs, a nonprofit working to create a world where a generation of problem-solving scientists in every country around the globe have what they need to tackle the world's biggest issues. She is an OpEd Project Public Voices Fellow.
Credit: Image courtesy of Seeding Labs



----------



## ae1905

scientificamerican.com

*NASA Test Proves Pulsars Can Function as a Celestial GPS*

Alexandra Witze, Nature magazine


----------



## ae1905

scientificamerican.com

*Why Are Some People More Creative Than Others?*

Roger Beaty, The Conversation US


^
can be interpreted as the interaction between N, especially Ne, and J

one would expect N-doms, especially Ne-doms, to be the most creative

Ne-doms are typically regarded as the creative types

the use of many different regions of the brain at once has been reported by Dario Nardi in Ne-doms, and in Ni-doms to a lesser extent


----------



## ae1905

scientificamerican.com

*How to Stop Sex Changes in Turtles on the Great Barrier Reef*

Ana Rita Patricio, The Conversation US


----------



## ae1905

Fun Fact: Chameleon Bones Glow in the Dark
By Nathaniel Scharping | January 17, 2018 4:05 pm
Many chameleons glow brightly under a UV lamp. _(Credit: David Prötzel/ZSM/LMU)_

Shine an ultraviolet light on a chameleon in the dark, and it will light up with an eerie blue glow. It’s not their color-changing skin at play here, either. It’s their bones.


----------



## ae1905

What Happened the Last Time Antarctica Melted?
By Eric Betz | January 18, 2018 2:34 pm


----------



## ae1905

*Hunter-Gatherers Are Masters of Smell*

By Matt Benoit | January 18, 2018 2:15 pm



^
I wonder if people who have a better sense of smell make better chefs?


----------



## ae1905

2017 Ranked Among Three Hottest Years Ever
The average amount of heat absorbed and trapped in the upper ocean last year was also higher than ever seen before


By Mindy Weisberger, LiveScience on January 18, 2018

https://www.scientificamerican.com/article/2017-ranked-among-three-hottest-years-ever/


----------



## ae1905

Scientists Move Closer to a Universal Flu Vaccine
Researchers hope their new approach, which works well in lab animals, may save more lives


By Dina Fine Maron on January 18, 2018





https://www.scientificamerican.com/article/scientists-move-closer-to-a-universal-flu-vaccine/


----------



## ae1905

Climate
The Good and Bad News about Rising Temperatures
Scientists believe they can better pinpoint how much future warming the world can expect


By Chelsea Harvey, ClimateWire on January 18, 2018

https://www.scientificamerican.com/article/the-good-and-bad-news-about-rising-temperatures/


----------



## ae1905

scientificamerican.com
"Dark Matter" DNA Influences Brain Development
Amy Maxmen, Nature magazine


----------



## ae1905

scientificamerican.com
Cleaning Up Air Pollution May Strengthen Global Warming
Chelsea Harvey, ClimateWire


----------



## ae1905

Scientists Have Figured Out Why Human Skin Doesn't Leak
Despite us losing 500 million skin cells per day.

https://www.sciencealert.com/this-is-why-human-skin-doesn-t-leak-biology?perpetual=yes&limitstart=1


----------



## ae1905

Physicists Say They've Created a Device That Generates 'Negative Mass'
https://www.sciencealert.com/negative-mass-quasi-particle-polaritons-low-energy-lasers


----------



## ae1905

Just Like Dolly: Scientists Clone 2 Monkeys
By Charlotte Hu | January 24, 2018


----------



## ae1905

*In an Israeli Cave, Scientists Discover Jawbone of Earliest Modern Human Out of Africa*

By Nicholas St. Fleur | Jan. 25, 2018









A fossilized human jawbone discovered in Israel. The find may suggest that Homo sapiens first migrated out of Africa at least 50,000 years earlier than previously thought. Credit Gerhard Weber, University of Vienna 

Scientists on Thursday announced the discovery of a fossilized human jawbone in a collapsed cave in Israel that they said is between 177,000 and 194,000 years old.

https://www.nytimes.com/2018/01/25/...lights&contentPlacement=1&pgtype=sectionfront


----------



## ae1905

Plastic Pollution Is Killing Coral Reefs, 4-Year Study Finds  (npr.org)


----------



## ae1905

Scientists Calculate Carbon Emissions of Your Sandwich  (theguardian.com)


----------



## ae1905

*A Private Place Where HIV, Zika and Ebola Hide*

Testicles protect viruses from immune attack, foiling attempts to extirpate the pathogens



By Shraddha Chakradhar, Nature Medicine on January 26, 2018 

https://www.scientificamerican.com/article/a-private-place-where-hiv-zika-and-ebola-hide/


----------



## ae1905

Genes that Your Parents Don't Pass To You Still Shape Who You Are, Study Finds   (sciencemag.org)


----------



## ae1905

Warming Threatens Reptiles More Than Birds and Mammals
Over planetary history, warm-blooded animals have outperformed cold-blooded animals in adapting to changing temperatures


By Chelsea Harvey, ClimateWire on January 30, 2018

https://www.scientificamerican.com/article/warming-threatens-reptiles-more-than-birds-and-mammals/


----------



## ae1905

Detailed image of red giant confirms theory about massive stars
π1 Gruis is 530 light years away.


https://www.engadget.com/2018/01/30/detailed-image-of-gruis-red-giant-confirms-theory/


----------



## ae1905

Orca Quickly Learns to Mimic Human Speech
A killer whale picks up words like “hello” and “bye-bye,” some on the first attempt

https://www.scientificamerican.com/article/orca-quickly-learns-to-mimic-human-speech/


----------



## ae1905

*How Warp-Speed Evolution Is Transforming Ecology*

Darwin thought evolution was too slow to change the environment on observable timescales—ecologists are discovering that he was wrong


By Rachael Lallensack, Nature magazine on February 1, 2018


https://www.scientificamerican.com/article/how-warp-speed-evolution-is-transforming-ecology/


----------



## ae1905

Polar bears filmed themselves while hunting seals on sea ice, revealing why they are so at risk from global warming
By Tom Yulsman | February 2, 2018


----------



## ae1905

Olympic Big Air Snowboarders Use Physics to Their Advantage
The PyeongChang Winter Games will debut big air snowboarding, where athletes who master the laws of physics will be most likely to medal and avoid injury 

https://www.scientificamerican.com/...-snowboarders-use-physics-to-their-advantage/


----------



## ae1905

https://www.scientificamerican.com/article/in-case-you-missed-it9/


----------



## Taileile

I've been really into microbiomes recently and found out that there might be a correlation between C-section births and later issues like autoimmune diseases, because the mother's microbes aren't able to colonize the infant properly. Of course C-sections are necessary in many cases, but it's important to keep in mind, especially with the growing rate of non-vaginal births. Microorganisms generally colonize your gut during delivery b/c the general consensus is that the gut is sterile in utero. 

Here's a really interesting article on the C-section vs vaginal deliveries, and here's a really interesting article on fecal microbiome transplants and how they can be used to promote/prevent obesity.

Sorry if the topic is kind of gross, I just thought it was really neat!


----------



## ae1905

NIH Study Links Cellphone Radiation To Cancer In Male Rats  (techcrunch.com)


----------



## ae1905

Gut Microbes Combine To Cause Colon Cancer, Study Suggests  (nytimes.com)


----------



## Pifanjr

ae1905 said:


> NIH Study Links Cellphone Radiation To Cancer In Male Rats  (techcrunch.com)


https://www.theverge.com/platform/a...r-national-toxicology-program-study-rats-mice


----------



## Pexerr

Thank you, interesting to read.


----------



## ae1905

theatlantic.com
A Pet Crayfish Can Clone Itself, and It's Spreading Around the World


----------



## ae1905

Scientists May Have Discovered the First Planets Outside the Milky Way  (washingtonpost.com)


----------



## ae1905

*The Arctic is Full of Toxic Mercury, and Climate Change is Going To Release it  (washingtonpost.com) *




> Mercury, a naturally occurring element, binds with living matter across the planet — but the Arctic is special. Normally, as plants die and decay, they decompose and mercury is released back to the atmosphere. But in the Arctic, plants often do not fully decompose. Instead, their roots are frozen and then become buried by layers of soil. This suspends mercury within the plants, where it can be remobilized again if permafrost thaws.
> 
> 
> How much would be released depends on how much the permafrost thaws — which in turn depends on the volume of greenhouse-gas emissions and subsequent warming of the planet. But permafrost thaw has begun in some places and scientists project that it will continue over the course of the century. The study says that with current emissions levels through 2100, permafrost could shrink by between 30 and 99 percent.


----------



## ae1905

https://www.scientificamerican.com/...oos-and-even-bees-can-be-righties-or-lefties/


----------



## ae1905

Scientists Create a New Form of Matter: Superionic Water Ice  (sciencemag.org)


----------



## ae1905

scientificamerican.com
New Study Finds Cutting Oil Subsidies Will Not Stop Climate Change


----------



## ae1905

scientificamerican.com
How Did Life Begin?


----------



## ae1905

*3 CRISPR Scientists Win Prestigious Award, Fanning Controversy over Credit*


----------



## ae1905

*Some Trees Beat Heat with Sweat*


----------



## ae1905

theatlantic.com
 The Increasingly Intricate Story of How the Americas Were Peopled


----------



## ae1905

*Scientists think they have found the reason some people are left-handed — and it has nothing to do with the brain*


----------



## ae1905

*Is Pluto Actually a Mash-Up of a Billion Comets?*


----------



## ae1905

*New Higgs Boson Observations Reveal Clues on the Nature of Mass*


----------



## ae1905

*NASA Mars Rover Finds Organic Matter in Ancient Lake Bed*


----------



## Pifanjr

https://www.nature.com/articles/d41586-018-05357-w


----------



## ae1905

*Bees Understand Zero, Zip, Nada*


----------



## ae1905

*A Serious New Hurdle For CRISPR: Edited Cells Might Cause Cancer, Find Two Studies*


----------



## ae1905

*Flashback Friday: Bumblebees detect electric fields with their body hair.*


----------



## ae1905

*The Standard Model (of Physics) at 50*


----------



## ae1905

*$950 Million Large Hadron Collider Upgrade 'Could Upend Particle Physics'*


----------



## ae1905

*The Milky Way Just Got Larger*


----------



## ae1905

*Weird Low-Light Bacteria Could Potentially Thrive on Mars*

*Too Small for Big Muscles, Tiny Animals Use Springs*
https://www.scientificamerican.com/article/too-small-for-big-muscles-tiny-animals-use-springs/


----------



## ae1905

*Dazzling satellite video reveals lightning dancing inside a mega-complex of thunderstorms*


----------



## ae1905

*Here's Why Expanding Protected Areas Isn't Saving Nature*


----------



## ae1905

*Stonehenge Builders Used Pythagoras' Theorem 2,000 Years Before He Was Born*


apparently, there were intps 4500 years ago


----------



## ae1905

What Over 1 Million Genomes Tell Us About Psychiatric Disorders


----------



## ae1905

*Einstein's Greatest Theory Validated on a Galactic Scale*


----------



## ae1905

*Scientists Pinpoint Brain Region That May Be Center of Alcohol Addiction*


----------



## ae1905

*Why Antarctica Is Getting Taller*


----------



## ae1905

*T. Rex Couldn't Stick Out Its Tongue*


----------



## ae1905

*Psychology, Neuroscience: Lacking in Individuality?*


----------



## ae1905

*Japan's Hayabusa2 Will Soon Punch An Asteroid*


----------



## ae1905

*Becoming Fearless: Study Finds Major Changes to Domesticated Bunny Brains*


----------



## ae1905

*Ocean Spray On Saturn Moon Contains Crucial Constituents For Life*


----------



## ae1905

The First Dog: Genes Reveal Behavior Came First


----------



## ae1905

*We Still Have No Idea How To Eliminate More Than a Quarter of Energy Emissions*


----------



## ae1905

*Scientists Use Caffeine To Control Genes*


----------



## ae1905

*First Confirmed Image of a Newborn Planet Revealed*


----------



## ae1905

*The wreckage of a few ancient planets formed the asteroid belt*


----------



## ae1905

New moon supermoon on July 13 | Tonight | EarthSky


----------



## ae1905

*Telescope Offers 'Clearest View Yet' of Milky Way - Including Plasma Filaments  (ska.ac.za) *


----------



## ae1905

*These Bread-makers Predate Farming*


----------



## NeonMidget

ae1905 said:


> *These Bread-makers Predate Farming*


Where do you find all of these ????


----------



## ae1905

*ADHD Drugs Aren't Doing What You Think, Scientists Warn*


An anonymous reader quotes a report from Inverse: The study authors Lisa Weyandt, Ph.D., a professor of psychology at the University of Rhode Island, and Tara White, Ph.D., an assistant professor of Behavioral and Social Sciences at Brown University, started out investigating the effects of ADHD medications in students that actually have a diagnosable attention deficit disorder. They showed that in these students, there is decreased activity in the areas of the brain controlling "executive functions," which can make it hard for them to stay organized or focused. But because both authors work with college students, they soon became more interested in the misuse of Adderall. In students whose brains aren't affected by ADHD, does Adderall act as a supercharger? Does it make those areas fly into overdrive and unlock otherwise untapped intellectual ability, as all pill-popping students hope?


----------



## ae1905

*An Enormous Study of the Genes Related to Staying in School*


----------



## ae1905

*What to Expect for Friday's Record-Breaking Lunar Eclipse*


----------



## ae1905

*Magnetic Fields May Be to Blame for Jupiter's Skin-Deep Stripes*


----------



## ae1905

*Summer Weather Is Getting 'Stuck' Due To Arctic Warming*

An anonymous reader quotes a report from The Guardian: Summer weather patterns are increasingly likely to stall in Europe, North America and parts of Asia, according to a new climate study that explains why Arctic warming is making heatwaves elsewhere more persistent and dangerous. Rising temperatures in the Arctic have slowed the circulation of the jet stream and other giant planetary winds, says the paper, which means high and low pressure fronts are getting stuck and weather is less able to moderate itself.


----------



## ae1905

*Scientists Find Direct Evidence of Ice On the Moon*

According to a new study, published today in the Proceedings of the National Academy of Sciences, scientists have found the first direct evidence of frozen water on the Moon's poles.


----------



## ae1905

*Construction Begins On $1 Billion Telescope That Will Take Pictures 10 Times Sharper Than Hubble's*

The $1 billion Giant Magellan Telescope in Chile is officially under construction with a scheduled date of operation in 2024. The telescope "will have an array of seven enormous mirrors totaling 80 feet in diameter, giving it 10 times the precision of the Hubble telescope," reports Quartz. "Among its advances is technology to help it correct for the distorting effect of Earth's atmosphere by using software to make hundreds of adjustments per second to its array of secondary mirrors."


----------



## ae1905

blogs.discovermagazine.com 

*Millions of Tiny Seashells Are Affecting How Clouds Form*


----------



## ae1905

*NASA apps take you to space with VR and selfies*


----------



## ae1905

*Scientists Stunned By a Neanderthal Hybrid Discovered in a Siberian Cave*


----------



## ae1905

*Black Holes Bolster Case For Quantum Physics' Spooky Action*


----------



## ae1905

*Forget "Manned" Missions--Females May Be More Mentally Resilient in Deep Space*

A controversial new study in lab mice hints at sex-based differences in cosmic ray–induced cognitive decline


----------



## ae1905

*NASA Releases Thousands of Hours of Apollo 11 Mission Audio*

NASA and the University of Texas have teamed up to digitize 19,000 hours of recordings from the Apollo 11 mission that landed the first two people on the moon.


----------



## ae1905

*Here's why the National Academy of Sciences thinks simple 'sugar molecules' will usher us into a new era of modern medicine*

UC Davis Health


*Within our body, simple sugar molecules can be connected together to create powerful structures that have recently been found to be linked to numerous health problems.*
*These long sugar chains that cover each of our cells are called glycans.*
*According to the National Academy of Sciences, creating a map of the glycan's location and structure will usher us into a new era of modern medicine.*


----------



## ae1905

*The Science of the Tennis Grunt*

A burgeoning field of study analyzes the effects of players’ grunts, from Sharapova’s shriek to Rafael Nadal’s chainsaw wail.


----------



## ae1905

*NASA explores product endorsements and rocket naming rights*


----------



## ae1905

*Bizarre Hexagon On Saturn May Be 180 Miles Tall*

Iwastheone shares a report from Space.com: The weird hexagon swirling around Saturn's north pole is much taller than scientists had thought, a new study suggests. Researchers have generally regarded the 20,000-mile-wide (32,000 kilometers) hexagon -- a jet stream composed of air moving at about 200 mph (320 km/h) -- as a lower-atmosphere phenomenon, restricted to the clouds of Saturn's troposphere. But the bizarre structure actually extends about 180 miles (300 km) above those cloud tops, up into the stratosphere, at least during the northern spring and summer, a new study suggests. The hexagon, which surrounds a smaller circular vortex situated at the north pole, has existed for at least 38 years; NASA's Voyager 1 and Voyager 2 spacecraft spotted the sharp-cornered feature when they flew by Saturn in 1980 and 1981, respectively.


----------



## ae1905

*Pulsar Discoverer Jocelyn Bell Burnell Wins $3-Million Breakthrough Prize* 

The award recognizes not only the astrophysicist’s transformative discovery, but also her subsequent work to promote equality and diversity in science


----------



## ae1905

*To Find Alien Life, NASA Needs Bigger, Bolder Exoplanet-Hunting Telescopes* 

A new, prestigious report charts an ambitious future for the space agency’s burgeoning search for Earth 2.0


----------



## ae1905

*Don't Be Fooled: Weather Is Not Climate* 

But climate affects weather


----------



## ae1905

*South Pole Telescope will study 'noise' from the early universe*


----------



## ae1905

*How Mammals Maintain Symmetry during Development* 

Communication with the placenta is key to ensuring body parts grow at the same rate


----------



## ae1905

*Astronomers have discovered black holes don't just 'eat' stars — they 'burp' them back up as 'stellar ghosts'*


----------



## ae1905

*Harvard Researchers Suggest Interstellar Object Might Have Been From Alien Civilization*


----------



## ae1905

*How Biologists Are Creating Life-like Cells From Scratch*


----------



## ae1905

*The 19th-Century Origins of Climate Science*


----------



## ae1905

*We Burn More Calories in the Afternoon and Evening, Study Finds*


----------



## ae1905

ae1905 said:


> Climate change: Oceans 'soaking up more heat than ... - BBC.com



*Scientists Acknowledge Key Errors in Study of How Fast the Oceans Are Warming*

A major study claimed the oceans were warming much faster than previously thought. But researchers now say they can't necessarily make that claim.


----------



## ae1905

*A Massive Impact Crater Has Been Detected Beneath Greenland's Ice Sheet*


----------



## ae1905

*Climate Change is Making Hurricanes Even More Destructive, Research Finds*


----------






## ae1905

*The kilogram has officially been redefined*


----------



## ae1905

*Science is Getting Less Bang for Its Buck*


----------






## ae1905

*A History of All the Times We've Sent Missions to Mars and Failed*


----------



## ae1905

A Helping of Facts With Your Thanksgiving Dinner
Biology. Chemistry. Physics. It’s all there on your plate. Take a moment to appreciate it.


----------



## ae1905

^


if you talk about _science_ I'm sure you won't get any arguments this Thanksgiving!


----------



## ae1905

*Nearby Star Is Sun's Long-Lost Sibling*


----------



## ae1905

*Have Astronomers Found Another "Alien Megastructure" Star?* 

Scientists now have a second example of a strange stellar phenomenon speculatively linked to extraterrestrial intelligence in 2015


----------



## ae1905

newyorker.com 

*An Oral History of Isaac Newton “Discovering” Gravity, as Told by His Contemporaries*


----------



## ae1905

*Seaweed Could Make Cows Burp Less Methane and Cut Their Carbon Hoofprint*


----------






## ae1905

*Ancient hippo-like reptile was a giant to rival the dinosaurs*

We thought only dinosaurs grew into giants during the Triassic, but we've discovered fossils of a mammal-like reptile that was 5 metres long and 3 metres tall


----------



## ae1905

*Pterosaurs Just Keep Getting Weirder* 

They beat birds at powered flight. Were they also a step ahead with feathers?


----------






## ae1905

*Cap-and-Trade for Cars Is Coming to the Northeast* 

Nine states and Washington, D.C., aim to rein in the rising share of emissions from transportation


----------



## ae1905

*Scientists Find a Brain Circuit That Could Explain Seasonal Depression*


----------



## ae1905

*Could Extraterrestrial Sugar Explain How Life Began on Earth?*

Scientists have discovered derivatives of life's building blocks in carbon-rich meteorite samples, a first. They also showed how biological compounds can form in interstellar space. These new findings support the theory that life on Earth originated with help from cosmic impacts. Sugars and sugar derivatives are essential to life on Earth. But they, along with amino acids and other organic molecules, can be found in space as well, on asteroids and comets.


----------



## ae1905

*51st Known Mersenne Prime Number Found*

The Great Internet Mersenne Prime Search (GIMPS) has discovered the largest known prime number, 2^82,589,933-1, having *24,862,048 digits*. A computer volunteered by Patrick Laroche from Ocala, Florida made the find on December 7, 2018.
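That digit count can be sanity-checked without ever materializing the number itself: the decimal length of 2^p − 1 is floor(p·log10(2)) + 1, since 2^p is never an exact power of ten (so subtracting 1 does not shorten it). A minimal sketch of the check (illustrative, not from the GIMPS announcement):

```python
import math

# Exponent of the 51st known Mersenne prime, 2^p - 1
p = 82_589_933

# Decimal digit count of 2^p - 1: floor(p * log10(2)) + 1.
# Valid because 2^p is never an exact power of 10, so 2^p and
# 2^p - 1 have the same number of decimal digits.
digits = math.floor(p * math.log10(2)) + 1
print(digits)  # 24862048, matching the reported figure
```

The fractional part of p·log10(2) here is far from an integer boundary, so ordinary double-precision `log10` is more than accurate enough for this check.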


----------



## ae1905

*'Sending Astronauts To Mars Would be Stupid'*

One of the first men to orbit the Moon has told BBC Radio 5 Live that it's "stupid" to plan human missions to Mars. Bill Anders, lunar module pilot of Apollo 8, the first human spaceflight to leave Earth's orbit, said sending crews to Mars was "almost ridiculous".


----------



## ae1905

*A Journey Into the Solar System's Outer Reaches, Seeking New Worlds To Explore*

NASA's New Horizons spacecraft will visit a tiny and mysterious object in the Kuiper belt on Tuesday, seeking clues to the formation of our cosmic neighborhood.


----------



## ae1905

*In Case You Missed It*


----------



## ae1905

the infj of birds finds a new home



*World's Rarest Bird, Madagascar Pochard, Gets New Home*

The rarest bird in the world -- a species of duck called the Madagascar pochard -- has been given a new home in time for the new year.


----------



## ae1905

*The Year in Science--and What Americans Thought about It* 

Pew polls reveal a public divided on climate, supportive of NASA and wary of AI and genetic engineering


----------



## ae1905

*Hubble telescope 'mother' Nancy Grace Roman dies*


----------



## ae1905

*Jocelyn Bell Burnell and the Discovery of Pulsars* 

Jocelyn Bell Burnell discovered pulsars (a specific type of neutron star) and got zero credit for it until recently. Here's her story


----------



## ae1905

*Astrobiology Highlights of 2018* 

A very incomplete list of contributions furthering our search for life elsewhere (and other stuff)


----------



## Loaf

https://science.howstuffworks.com/space/aliens-ufos/could-dark-matter-spawn-shadow-life.htm


----------



## ae1905

*Scientists Drill Into 3,500 Feet of Ice To Reach a Mysterious Antarctic Lake*

Late last week, a team of about 50 scientists, drillers, and support staff successfully punched through nearly 4,000 feet of ice to access an Antarctic subglacial lake for just the second time in human history.


----------



## ae1905

*NASA Spacecraft Confirms Successful Flyby of Distant Solar System Object*

NASA received a critical signal from one of its most distant spacecraft this morning, confirming that the vehicle has just flown by a tiny frozen rock in the outer reaches of the Solar System. From a report: That space probe, named New Horizons, has now made history. Currently located more than 4 billion miles from Earth, the spacecraft has whizzed past the most distant -- and most primitive -- object that's ever been visited by humanity.


----------



## ae1905

*China Successfully Lands Spacecraft On Far Side of the Moon*


----------



## ae1905

*Al Gore: America Is Close to a ‘Political Tipping Point’ on Climate Change*


----------



## ae1905

*Congress just got a bumper-crop of scientists. Meet the 10 new science whizzes on Capitol Hill.*


----------



## ae1905

*Once Considered Outlandish, the Idea That Plants Help Their Relatives is Taking Root*

An anonymous reader shares a report: A Canadian biologist planted the seed of the idea more than a decade ago, but many plant biologists regarded it as heretical -- plants lack the nervous systems that enable animals to recognize kin, so how can they know their relatives? But with a series of recent findings, the notion that plants really do care for their most genetically close peers -- in a quiet, plant-y way -- is taking root.


----------



## ae1905

*Will the World Embrace Plan S, the Radical Proposal To Mandate Open Access To Science Papers?*


----------



## ae1905

*Super Blood Wolf Moon Eclipse Is Coming Later This Month*

The "super blood wolf moon eclipse" is coming to a sky near you later this month. "The total lunar eclipse will start late on Sunday, Jan. 20 and finish early on Monday, Jan. 21," reports USA Today.


----------



## ae1905

*Cosmic Collision Created "Snowman" MU69—the Farthest World Ever Explored* 

Close-up images from NASA’s New Horizons probe show the space rock has two distinct lobes


----------



## ae1905

CO2 Emissions Reached an All-Time High in 2018 - Scientific ...


----------



## ae1905

*All Sand on Earth Could Be Made of Star Stuff* 

Silica, a common ingredient in sand, concrete and glass, may have its origins in supernovae


----------



## ae1905

*IBM is turning to your smartphone to improve weather forecasts*


----------



## ae1905

*Scientists Say They're Close to Making A Spicy Tomato*


----------



## ae1905

*Plants Can Hear Animals Using Their Flowers*


----------



## ae1905

*Ocean Warming Is Accelerating Faster Than Thought, New Research Finds*

An analysis concluded that Earth’s oceans are heating up 40 percent faster on average than a United Nations panel estimated five years ago, a finding with dire implications for climate change.
By Kendra Pierre-Louis


----------



## Pifanjr

ae1905 said:


> *Ocean Warming Is Accelerating Faster Than Thought, New Research Finds*
>
> An analysis concluded that Earth’s oceans are heating up 40 percent faster on average than a United Nations panel estimated five years ago, a finding with dire implications for climate change.
> By Kendra Pierre-Louis


Didn't you post a study recently saying the oceans were heating up 60% faster, then an update that the math they used was wrong?


----------



## ae1905

*Researchers may have witnessed the birth of a black hole*


----------



## ae1905

Pifanjr said:


> Didn't you post a study recently saying the oceans were heating up 60% faster, then an update that the math they used was wrong?



that other study made an error in its estimate of the uncertainty of the rate of warming...it didn't contradict the conclusion that the oceans were heating up at a faster rate


this study was done by a separate team and used data from another source, an older technology using thermocouples dangled from ocean ships


----------



## ae1905

*An Ocean Engineer and a Nuclear Physicist Walk Into Congress …* 

Eight Democratic scientists won House seats in November, campaigning on issues like offshore drilling and climate change. Now they want to make Congress more scientific.


----------



## ae1905

*CERN's New Collider Design Is Four Times Larger Than the LHC*

If built, the Future Circular Collider will be 10 times more powerful than the Large Hadron Collider, and could discover new types of particles.


----------



## ae1905

*Giant Leaf For Mankind? China Germinates First Seed on Moon*

A small green shoot is growing on the moon after a cotton seed germinated onboard a Chinese lunar lander, scientists said.


----------



## ae1905

*China and NASA Shared Data About Historic Moon Landing*


----------



## ae1905

*'Mona Lisa Effect' Is Real But Doesn't Apply To Leonardo's Painting*


----------



## ae1905

*Key West Moves To Ban Sunscreens That Could Damage Reefs*


----------



## ae1905

*Fasting Can Improve Overall Health By Causing Circadian Clocks In the Liver and Skeletal Muscle To Rewire Their Metabolism, Study Finds*


----------

