
Wednesday, 27 June 2012

Sifting Through a Trillion Electrons

Modern research tools like supercomputers, particle colliders, and telescopes are generating so much data, so quickly, that many scientists fear they will soon be unable to keep up with the deluge.
After querying a dataset of 114,875,956,837 particles for those with energy values less than 1.5, FastQuery identifies 57,740,614 particles, which are mapped on this plot. (Credit: Image by Oliver Rubel, Berkeley Lab.)



"These instruments are capable of answering some of our most fundamental scientific questions, but it is all for nothing if we can't get a handle on the data and make sense of it," says Surendra Byna of the Lawrence Berkeley National Laboratory's (Berkeley Lab's) Scientific Data Management Group.
That's why Byna and several of his colleagues from the Berkeley Lab's Computational Research Division teamed up with researchers from the University of California, San Diego (UCSD), Los Alamos National Laboratory, Tsinghua University, and Brown University to develop novel software strategies for storing, mining, and analyzing massive datasets -- more specifically, for data generated by a state-of-the-art plasma physics code called VPIC.
When the team ran VPIC on the Department of Energy's National Energy Research Scientific Computing Center's (NERSC's) Cray XE6 "Hopper" supercomputer, they generated a three-dimensional (3D) magnetic reconnection dataset of a trillion particles. VPIC simulated the process in thousands of time-steps, periodically writing a massive 32 terabyte (TB) file to disk.
Using their tools, the researchers wrote each 32 TB file to disk in about 20 minutes, at a sustained rate of 27 gigabytes per second (GB/s). By applying an enhanced version of the FastQuery tool, the team indexed this massive dataset in about 10 minutes, then queried the dataset in three seconds for interesting features to visualize.
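
Those throughput figures are self-consistent; here is a quick back-of-the-envelope check (a sketch, assuming a 1 TB = 1,024 GB convention and a flat 20-minute write):

    # Rough consistency check of the reported write rate (assumptions noted above).
    file_size_gb = 32 * 1024           # one 32 TB time-step file, in GB
    write_time_s = 20 * 60             # roughly 20 minutes per write
    print(f"{file_size_gb / write_time_s:.1f} GB/s")   # ~27.3 GB/s, in line with the reported 27 GB/s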
"This is the first time anyone has ever queried and visualized 3D particle datasets of this size," says Homa Karimabadi, who leads the space physics group at UCSD.
The Problem with Trillion Particle Datasets
Magnetic reconnection is a process in which the magnetic topology of a plasma (a gas made up of charged particles) is rearranged, leading to an explosive release of energy in the form of plasma jets, heated plasma, and energetic particles. Reconnection is the mechanism behind the aurora borealis (a.k.a. northern lights) and solar flares, as well as fractures in Earth's protective magnetic field -- fractures that allow energetic solar particles to seep into our planet's magnetosphere and wreak havoc on electronics, power grids, and space satellites.
According to Karimabadi, one of the major unsolved mysteries in magnetic reconnection is how, and under what conditions, energetic particles are generated. Until recently, the closest researchers had come to studying this was through 2D simulations. Although these datasets are much more manageable, containing at most only billions of particles, Karimabadi notes that lingering magnetic reconnection questions cannot be answered with 2D particle simulations alone. In fact, these datasets leave out a lot of critical information.
"To answer these questions we need to take into full account additional effects such as flux rope interactions and resulting turbulence that occur in 3D simulations," says Karimabadi. "But as we add another dimension, the number of particles in our simulations grows from billions to trillions. And it is impossible to pull up a trillion-particle dataset on your computer screen; it would just fill up your screen with black dots."
To address the challenges of analyzing 3D particle data, Karimabadi and a team of astrophysicists joined forces with the ExaHDF5 team, a Department of Energy funded collaboration to develop high performance I/O and analysis strategies for future exascale computers. Prabhat, of Berkeley Lab's Visualization Group, leads the ExaHDF5 team.
A Scalable Storage Approach Sets Foundation for a Successful Search
According to Byna, VPIC accurately models the complexities of magnetic reconnection at this scale by breaking down the "big picture" into distinct pieces, each of which is assigned, using the Message Passing Interface (MPI), to a group of processors to compute. These groups, or MPI domains, work independently to solve their piece of the problem. By subdividing the work, researchers can simultaneously employ hundreds of thousands of processors to simulate a massive and complex phenomenon like magnetic reconnection.
In the original implementation of VPIC, each MPI domain generates a binary file once it finishes processing its assigned piece of the problem. This ensures that the data is written efficiently.
But according to Byna, this approach, called file-per-process, has a number of limitations. One major limitation is that the number of files generated by large-scale scientific simulations, like magnetic reconnection, can become unwieldy. In fact, his team's largest VPIC run on Hopper contained about 20,000 MPI domains -- that's 20,000 binary files per time-step. And because most analysis tools cannot easily read binary files, another post-processing step would have been required to convert the data into a format that these tools can open.
"It takes a really long time to perform a simple Linux search of a 20,000-file directory; and since the data is not stored in standard data formats, such as HDF5, existing data management and visualization tools cannot directly work with the binary file," says Byna. "Ultimately, these limitations become a bottleneck to scientific analysis and discovery."
But by incorporating H5Part code into the VPIC codebase, Byna and his colleagues managed to overcome all of these challenges. H5Part is an easy-to-use veneer layer on top of HDF5 that allows for the management and analysis of extremely large particle and block-structured datasets.
According to Prabhat, this easy modification to the code-base creates one shared HDF5 file per time-step, instead of 20,000 independent binary files. Because most visualization and analysis tools can use HDF5 files, this approach eliminates the need to re-format the data. With the latest performance enhancements implemented by the ExaHDF5 team, VPIC was able to write each 32 TB time-step to disk at a sustained rate of 27 GB/s.
"This is quite an achievement when you consider that the theoretical peak I/O for the machine is about 35 GB/s," says Prabhat. "Very few production I/O frameworks and scientific applications can achieve that level of performance."
Mining a Trillion Particle Dataset with FastQuery
Once this torrent of information has been generated and stored, the next challenge that researchers face is how to make sense of it. On this front, ExaHDF5 team members Jerry Chou and John Wu implemented an enhanced version of FastQuery, an indexing and querying tool. Using this tool, they indexed the trillion-particle, 32 TB dataset in about 10 minutes, and queried the massive dataset for particles of interest in approximately three seconds. This was the first time anyone had successfully queried a trillion-particle dataset this quickly.
The team was able to accelerate FastQuery's indexing and query capabilities by implementing a hierarchical load-balancing strategy that involves a hybrid of MPI and Pthreads. At the MPI level, FastQuery breaks up the large dataset into multiple fixed-size sub-arrays. Each sub-array is then assigned to a set of compute nodes, or MPI domains, which is where the indexing and querying occurs.
The load-balancing flexibility happens within these MPI domains, where the work is dynamically pooled among threads -- which are the smallest unit of processing that can be scheduled by an operating system. When constructing the indexes, the threads build bitmaps on the sub-arrays and store them into the same HDF5 file. When evaluating a query, the processors apply the query to each sub-array and return results.
Because FastQuery is built on the FastBit bitmap indexing technology, Byna notes that researchers can search their data based on an arbitrary range of conditions that is defined by available data values. This essentially means that a researcher can now feasibly search a trillion particle dataset and sift out electrons by their energy values.
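
The bitmap-index idea can be illustrated with a small NumPy toy (a stand-in for FastBit/FastQuery, not the real implementation; the bins and data are made up):

    # Toy bitmap index: one boolean bitmap per energy bin.
    import numpy as np

    rng = np.random.default_rng(0)
    energy = rng.exponential(scale=1.0, size=10_000_000)    # made-up particle energies

    edges = np.array([0.0, 0.5, 1.0, 1.5, 2.0, np.inf])
    # In a real system these bitmaps would be compressed and stored alongside the data.
    bitmaps = [(energy >= lo) & (energy < hi) for lo, hi in zip(edges[:-1], edges[1:])]

    # Answer "energy >= 1.5" by OR-ing the bitmaps of the qualifying bins
    # instead of rescanning the raw values.
    hits = bitmaps[3] | bitmaps[4]
    print(hits.sum(), "particles satisfy the query")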
According to Prabhat, this unique querying capability also serves as the basis for successfully visualizing the data. Because typical computer displays contain on the order of a few million pixels, it is simply impossible to render a dataset with trillions of particles. So to analyze their data, researchers must reduce the number of particles in their dataset before rendering. The scientists can now achieve this by using the FastQuery tool to identify the particles of interest to render.
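
In practice, that query-then-render workflow looks roughly like the following sketch (synthetic data and a hypothetical energy threshold; the real pipeline used FastQuery and parallel visualization tools):

    # Select only the particles of interest, then plot just that subset.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    n = 5_000_000
    x, y = rng.random(n), rng.random(n)                     # made-up particle positions
    energy = rng.exponential(scale=1.0, size=n)

    mask = energy > 1.5                                     # the "query": keep only energetic particles
    print(f"rendering {mask.sum():,} of {n:,} particles")

    plt.scatter(x[mask], y[mask], s=0.1, c=energy[mask], cmap="viridis")
    plt.colorbar(label="energy")
    plt.savefig("energetic_particles.png", dpi=150)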
"Although our VPIC runs typically generate two types of data -- grid and particle -- we never did a whole lot with the particle data because it was really hard to extract information from a trillion particle dataset, and there was no way to sift out the useful information," says Karimabadi.
But with the new query-based visualization techniques, Karimabadi and his team were finally able to verify the localization behavior of energetic particles, gain insights into the relationship between the structure of the magnetic field and energetic particles, and investigate the agyrotropic distribution of particles near the reconnection hot-spot in a 3D trillion particle dataset.
"We have hypothesized about these phenomena in the past, but it was only the development and application of these new analysis tools that enabled us to unlock the scientific discoveries and insights," says Karimabadi. "With these new tools, we can now go back to our archive of particle datasets and look at the physics questions that we couldn't get at before."
"Most of today's simulation codes generate datasets on the order of tens of millions to a few billon particles, so a trillion-particle dataset -- that is, a million-million particles -- poses unprecedented data management challenges," says Prabhat. "In this work, we have demonstrated that the HDF5 I/O middleware and the FastBit indexing technology can handle these truly massive datasets and operate at scale on current petascale platforms."
But according to Prabhat, exascale platforms will produce even larger datasets in the near future, and researchers need to come up with novel techniques and usable software that can facilitate scientific discovery going forward. He notes that one of the primary goals of the ExaHDF5 team is to scale the widely used HDF5 I/O middleware to operate on modern petascale and future exascale platforms.
In addition to Byna, Chou, Karimabadi, Prabhat and Wu, other members of this collaboration include Oliver Rubel, William Daughton, Vadim Roytershteyn, Wes Bethel, Mark Howison, Ke-Jou Hsu, Kuan-Wu Lin, Arie Shoshani and Andrew Uselton. The team also acknowledges critical support provided by NERSC consultants, NERSC system staff, and Cray engineers in assisting with their large-scale runs.

Rewriting Quantum Chips with New Laser Technique


Laser Technique Brings Ultrafast Computing Closer to Reality


The promise of ultrafast quantum computing has moved a step closer to reality with a technique to create rewritable computer chips using a beam of light. Researchers from The City College of New York (CCNY) and the University of California Berkeley (UCB) used light to control the spin of an atom's nucleus in order to encode information.
The technique could pave the way for quantum computing, a long-sought leap forward toward computers with processing speeds many times faster than today's. The group will publish their results on June 26 in Nature Communications.
Current electronic devices are approaching the upper limits in processing speed, and they rely on etching a pattern into a semiconductor to create a chip or integrated circuit. These patterns of interconnections serve as highways to shuttle information around the circuit, but there is a drawback.
"Once the chip is printed, it can only be used one way," explained Dr. Jeffrey Reimer, UCB professor of chemical and biomolecular engineering and the study co-author.
The team -- including CCNY Professor of Physics Carlos Meriles and PhD students Jonathan King of UCB and Yunpu Li of CCNY -- saw a remedy for these problems in the emerging sciences of spintronics and quantum computing.
They have developed a technique to use laser light to pattern the alignment of "spin" within atoms so that the pattern can be rewritten on the fly. Such a technique may one day lead to rewritable spintronic circuits.
Digital electronics and conventional computing rely on translating electrical charges into the zeros and ones of binary code. A "spintronics" computer, on the other hand, would use the quantum property of electron spin, which enables the electron to store any number between zero and one.
Imagine this as if the electron were a "yin-yang" symbol in which the proportions of the dark and light areas -- representing values from zero to one -- could vary at will. This would mean that multiple computations could be done simultaneously, which would amp up processing power.
Attempts at using electrons for quantum computing have been plagued, however, by the fact that electron spins switch back and forth rapidly. Thus, they make very unstable vehicles to hold information. To suppress the random switching back and forth of electrons, the UCB and CCNY researchers used laser light to produce long-lasting nuclear spin "magnets" that can pull, push, or stabilize the spins of the electrons.
They did this by illuminating a sample of gallium arsenide -- the same semiconductor used in cell phone chips -- with a pattern of light, much as lithography etches a physical pattern onto a traditional integrated circuit. The illuminated pattern aligned the spins of all the atomic nuclei, and, thus, their electrons, at once, creating a spintronic circuit.
"What you could have is a chip you can erase and rewrite on the fly with just the use of a light beam," said Professor Meriles. Changing the pattern of light altered the layout of the circuit instantly.
"If you can actually rewrite with a beam of light and alter this pattern, you can make the circuit morph to adapt to different requirements," he added. "Imagine what you can make a system like that do for you!"


Tuesday, 26 June 2012

Gravitational Lensing: Astronomers Spot Rare Arc from Hefty Galaxy Cluster

Seeing is believing, except when you don't believe what you see. Astronomers using NASA's Hubble Space Telescope have found a puzzling arc of light behind an extremely massive cluster of galaxies residing 10 billion light-years away. The galactic grouping, discovered by NASA's Spitzer Space Telescope, was observed as it existed when the universe was roughly a quarter of its current age of 13.7 billion years. The giant arc is the stretched shape of a more distant galaxy whose light is distorted by the monster cluster's powerful gravity, an effect called gravitational lensing. The trouble is, the arc shouldn't exist.



"When I first saw it, I kept staring at it, thinking it would go away," said study leader Anthony Gonzalez of the University of Florida in Gainesville, whose team includes researchers from NASA's Jet Propulsion Laboratory, Pasadena, Calif. "According to a statistical analysis, arcs should be extremely rare at that distance. At that early epoch, the expectation is that there are not enough galaxies behind the cluster bright enough to be seen, even if they were 'lensed,' or distorted by the cluster. The other problem is that galaxy clusters become less massive the further back in time you go. So it's more difficult to find a cluster with enough mass to be a good lens for gravitationally bending the light from a distant galaxy."
Galaxy clusters are collections of hundreds to thousands of galaxies bound together by gravity. They are the most massive structures in our universe. Astronomers frequently study galaxy clusters to look for faraway, magnified galaxies behind them that would otherwise be too dim to see with telescopes. Many such gravitationally lensed galaxies have been found behind galaxy clusters closer to Earth.
The surprise in this Hubble observation is spotting a galaxy lensed by an extremely distant cluster. Dubbed IDCS J1426.5+3508, the cluster is the most massive found at that epoch, weighing as much as 500 trillion suns. It is 5 to 10 times larger than other clusters found at such an early time in the history of the universe. The team spotted the cluster in a search using NASA's Spitzer Space Telescope in combination with archival optical images taken as part of the National Optical Astronomy Observatory's Deep Wide Field Survey at the Kitt Peak National Observatory, Tucson, Ariz. The combined images allowed them to see the cluster as a grouping of very red galaxies, indicating they are far away.
This unique system constitutes the most distant cluster known to "host" a giant gravitationally lensed arc. Finding this ancient gravitational arc may yield insight into how, during the first moments after the Big Bang, conditions were set up for the growth of hefty clusters in the early universe.
The arc was spotted in optical images of the cluster taken in 2010 by Hubble's Advanced Camera for Surveys. The infrared capabilities of Hubble's Wide Field Camera 3 helped provide a precise distance, confirming it to be one of the farthest clusters yet discovered.
Once the astronomers determined the cluster's distance, they used Hubble, the Combined Array for Research in Millimeter-wave Astronomy (CARMA) radio telescope, and NASA's Chandra X-ray Observatory to independently show that the galactic grouping is extremely massive.
"The chance of finding such a gigantic cluster so early in the universe was less than one percent in the small area we surveyed," said team member Mark Brodwin of the University of Missouri-Kansas City. "It shares an evolutionary path with some of the most massive clusters we see today, including the Coma cluster and the recently discovered El Gordo cluster."
An analysis of the arc revealed that the lensed object is a star-forming galaxy that existed 10 billion to 13 billion years ago. The team hopes to use Hubble again to obtain a more accurate distance to the lensed galaxy. The team's results are described in three papers, which will appear online today and will be published in the July 10, 2012 issue of The Astrophysical Journal. Gonzalez is the first author on one of the papers; Brodwin, on another; and Adam Stanford of the University of California at Davis, on the third. Daniel Stern and Peter Eisenhardt of JPL are co-authors on all three papers.

Moderate Coffee Consumption Offers Protection Against Heart Failure

While current American Heart Association heart failure prevention guidelines warn against habitual coffee consumption, some studies propose a protective benefit, and still others find no association at all. Amidst this conflicting information, research from Beth Israel Deaconess Medical Center attempts to shift the conversation from a definitive yes or no, to a question of how much.



"Our results did show a possible benefit, but like with so many other things we consume, it really depends on how much coffee you drink," says lead author Elizabeth Mostofsky, MPH, ScD, a post-doctoral fellow in the cardiovascular epidemiological unit at BIDMC. "And compared with no consumption, the strongest protection we observed was at about four European, or two eight-ounce American, servings of coffee per day."
The study, published online June 26 in the journal Circulation: Heart Failure, found that these moderate coffee drinkers were at an 11 percent lower risk of heart failure.
Data was analyzed from five previous studies -- four conducted in Sweden, one in Finland -- that examined the association between coffee consumption and heart failure. The self-reported data came from 140,220 participants and involved 6,522 heart failure events.
In a summary of the published literature, the authors found a "statistically significant J-shaped relationship" between habitual coffee consumption and heart failure: the protective benefit increases with consumption, peaking at about two eight-ounce American servings a day. Beyond that, the protection slowly declines as more coffee is consumed; at five cups a day there is no benefit, and at more than five cups a day there may be potential for harm.
It's unclear why moderate coffee consumption provides protection from heart failure, but the researchers say part of the answer may lie in the intersection between regular coffee drinking and two of the strongest risk factors for heart failure -- diabetes and elevated blood pressure.
"There is a good deal of research showing that drinking coffee lowers the risk for type 2 diabetes, says senior author Murray Mittleman, MD, DrPH, a physician in the Cardiovascular Institute at Beth Israel Deaconess Medical Center, an Associate Professor of Medicine at Harvard Medical School and director of BIDMC's cardiovascular epidemiological research program. "It stands to reason that if you lower the risk of diabetes, you also lower the risk of heart failure."
There may also be a blood pressure benefit. Studies have consistently shown that light coffee and caffeine consumption raise blood pressure. "But at that moderate range of consumption, people tend to develop a tolerance where drinking coffee does not pose a risk and may even be protective against elevated blood pressure," says Mittleman.
This study was not able to assess the strength of the coffee, nor did it look at caffeinated versus non-caffeinated coffee.
"There is clearly more research to be done," says Mostofsky. "But in the short run, this data may warrant a change to the guidelines to reflect that coffee consumption, in moderation, may provide some protection from heart failure."

Monday, 25 June 2012

Mind Reading from Brain Recordings?

'Neural Fingerprints' of Memory Associations Decoded.
Researchers have long been interested in discovering the ways that human brains represent thoughts through a complex interplay of electrical signals. Recent improvements in brain recording and statistical methods have given researchers unprecedented insight into the physical processes underlying thoughts. For example, researchers have begun to show that it is possible to use brain recordings to reconstruct aspects of an image or movie clip someone is viewing, a sound someone is hearing or even the text someone is reading.

A new study by University of Pennsylvania and Thomas Jefferson University scientists brings this work one step closer to actual mind reading by using brain recordings to infer the way people organize associations between words in their memories.
The research was conducted by professor Michael J. Kahana of the Department of Psychology in Penn's School of Arts and Sciences and graduate student Jeremy R. Manning, then a member of the Neuroscience Graduate Group in Penn's Perelman School of Medicine. They collaborated with other members of Kahana's laboratory, as well as with research faculty at Thomas Jefferson University Hospital.
Their study was published in The Journal of Neuroscience.
The brain recordings necessary for the study were made possible by the fact that the participants were epilepsy patients who volunteered for the study while awaiting brain surgery. These participants had tiny electrodes implanted in their brains, which allowed researchers to precisely observe electrical signals that would not have been possible to measure outside the skull. While recording these electrical signals, the researchers asked the participants to study lists of 15 randomly chosen words and, a minute later, to repeat the words back in whichever order they came to mind.
The researchers examined the brain recordings as the participants studied each word to home in on signals in the participants' brains that reflected the meanings of the words. About a second before the participants recalled each word, these same "meaning signals" that were identified during the study phase were spontaneously reactivated in the participants' brains.
Because the participants were not seeing, hearing or speaking any words at the times these patterns were reactivated, the researchers could be sure they were observing the neural signatures of the participants' self-generated, internal thoughts.
Critically, differences across participants in the way these meaning signals were reactivated predicted the order in which the participants would recall the words. In particular, the degree to which the meaning signals were reactivated before recalling each word reflected each participant's tendency to group similar words (like "duck" and "goose") together in their recall sequence. Since the participants were instructed to say the words in the order they came to mind, the specific sequence of recalls a participant makes provides insights into how the words were organized in that participant's memory.
In an earlier study, Manning and Kahana used a similar technique to predict participants' tendencies to organize learned information according to the time in which it was learned. Their new study adds to this research by elucidating the neural signature of organizing learned information by meaning.
"Each person's brain patterns form a sort of 'neural fingerprint' that can be used to read out the ways they organize their memories through associations between words," Manning said.
The techniques the researchers developed in this study could also be adapted to analyze many different ways of mentally organizing studied information.
"In addition to looking at memories organized by time, as in our previous study, or by meaning, as in our current study, one could use our technique to identify neural signatures of how individuals organize learned information according to appearance, size, texture, sound, taste, location or any other measurable property," Manning said.
Such studies would paint a more complete picture of a fundamental aspect of human behavior.
"Spontaneous verbal recall is a form of memory that is both pervasive in our lives and unique to the human species," Kahana said. "Yet, this aspect of human memory is the least well understood in terms of brain mechanisms. Our data show a direct correspondence between patterns of brain activity and the meanings of individual words and show how this neural representation of meaning predicts the way in which one item cues another during spontaneous recall.
"Given the critical role of language in human thought and communication, identifying a neural representation that reflects the meanings of words as they are spontaneously recalled brings us one step closer to the elusive goal of mapping thoughts in the human brain."

Thursday, 21 June 2012

Physicists on Alert for Higgs Announcement



New data on hunt for elusive particle to be presented at Australia conference
Unless you’re the Higgs boson, don’t expect much attention in July when the International Conference on High Energy Physics convenes in Melbourne, Australia.
Rumors of an impending Very Important Higgs Announcement at the physics meeting have already begun invading the Internet, ignited by blogs saturated with speculation and incomplete information about a possible Higgs discovery.
The two teams searching for the elusive particle at CERN’s Large Hadron Collider near Geneva, Switzerland, are keeping quiet.
“Please be patient for a few more weeks,” says Guido Tonelli, spokesman for the Compact Muon Solenoid team. “We have just finished data taking, and people work day and night including weekends to reach a scientifically validated result.” Tonelli expects that CMS will have results ready to present, but says that “the pressure is huge.”
Tonelli cautions that the analysis is morphing almost constantly, saying, “I am very surprised that rumors appear on a subject that is really evolving daily.”
Like Bigfoot, the Higgs boson has evaded detection for decades, despite repeated efforts to flush the particle from its quantum homeland. Physicists invoked the particle in the 1960s as a by-product of the mechanism that explains how other basic particles acquire mass. Now, among the characters in the standard model of particle physics, the Higgs is the last remaining holdout, the only particle still unwilling to reveal itself in particle accelerator experiments.
At CERN, scientists are looking for the boson by smashing streams of protons together, then searching through the debris for the remnants of Higgs bosons. A Higgs produced from the energy of the colliding protons remains intact for so short a time that it can’t be observed directly. Instead, scientists infer its presence from the rubble produced when it falls apart. If enough of these rubble piles add up to a particle of the same mass, scientists can conclude that they’ve seen evidence of the Higgs boson.
Although earlier results from the LHC teams, presented in December, hinted at a Higgs boson with a mass around 125 billion electron volts, there weren’t enough rubble piles to build a statistically significant result. Physicists would feel confident claiming a discovery only when the piles amass to something produced by chance less than once out of 3.5 million times.
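
That "less than once out of 3.5 million times" threshold is the particle-physics convention of a five-sigma result; here is a quick check with SciPy (assuming the usual one-sided Gaussian convention):

    # Five-sigma discovery threshold expressed as a chance probability.
    from scipy.stats import norm

    p = norm.sf(5)      # one-sided tail probability beyond five standard deviations
    print(p)            # ~2.9e-7
    print(1 / p)        # ~3.5 million, i.e. "less than once out of 3.5 million times"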

Wednesday, 20 June 2012

Early Stars Created a Sight Yet Unseen

Evidence could someday be detected by radio telescopes, study suggests

A 3-D simulation of the early universe suggests that the first stars left a cosmic signature large enough to be read by radio telescopes. 
“It’s a new way to probe the universe when it was very young,” says Zoltan Haiman, a cosmologist at Columbia University, who was not involved in the new work. “We have very few ways to do that.”
Studying early star formation is challenging because the first galaxies were so small — and, because of the universe’s expansion, are now so distant — that even the most sensitive eyes in the sky can’t see them.
But the new simulation, described online June 20 in Nature, suggests that a stellar signature exists in the form of fluctuating radio waves, oscillations produced when young stars and nascent galaxies warm and excite surrounding hydrogen gas. The stars and galaxies in the period simulated, when the universe was 180 million years old, are distributed in a distinct, detectable pattern.
“There’s a prominent weblike structure, meaning that you have clumps of galaxies and nearly empty voids,” says study coauthor Eli Visbal, a graduate student at Harvard University. The clumps and voids should be readily discernible, he notes. “Turns out, it’s much easier than was previously thought to observe these first stars using radio.”
Visbal and his colleagues simulated a cube of space measuring 1.3 billion light-years across. They filled it with hydrogen gas and dark matter, the invisible counterpart to normal matter, and accounted for the recent observation that the two kinds of matter travel at different speeds. These different rates, when combined with varying densities of each substance, affect star formation by stunting growth in some places and promoting it in others. “The dark matter collapses into clumps,” Visbal says. “And the gas, due to the force of gravity, falls into these clumps and forms stars and galaxies.”
But not where the gas is moving too quickly relative to the dark matter clumps, which then have to tug harder to get the gas to come inside. A paucity of gas produces a star-forming void, while dense gas congeals to form clusters of stars and galaxies. Those clusters then heat up and excite the surrounding sea of neutral hydrogen atoms, which emit radiation detectable by radio telescopes.
The telescopes, though, have to be scanning the sky at the right frequency — lower than the band that’s typically used by the most powerful radio detectors.
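
That frequency follows from redshifting the 21-centimeter line of neutral hydrogen; a rough estimate, assuming the 180-million-year epoch corresponds to a redshift of roughly z = 20:

    # Rough estimate of the observing frequency for the 21 cm signal from the first stars.
    rest_frequency_mhz = 1420.4      # 21 cm hyperfine line of neutral hydrogen
    z = 20                           # assumed redshift of the ~180-million-year epoch
    observed_mhz = rest_frequency_mhz / (1 + z)
    print(f"~{observed_mhz:.0f} MHz")   # ~68 MHz, far below the GHz bands most big radio telescopes use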
But future telescopes — maybe even the enormous Square Kilometre Array, under development on two continents  — could do the job. Another option, says UCLA astrophysicist Steven Furlanetto, would be a proposed project called the Dark Ages Radio Explorer, a lunar satellite that could use the moon as a shield against interference by technologies like television and radio.
In the simulation, the researchers focused on a critical epoch in the early universe, Furlanetto notes. “That’s the first moment in which complexity appears,” he says. “Once those first stars form and you get radiation and nuclear fusion and explosions, it gets very, very complex almost instantaneously.”
