Our current understanding of the basic building blocks of the universe is the “Standard Model” of particle physics. It is a mathematical theory which says that all matter (the “stuff” that makes up you, me, planets, stars and everything else we can see) and its interactions (forces like electrostatic attraction and repulsion) are built from a few fundamental particles. This theory was built up starting from the formulation of quantum mechanics in the 1920s and completed as a full-fledged description in the 1970s.
According to the “Standard Model”, there is a set of 12 fundamental particles out of which atoms and all matter are built. These include the electron, the quarks (found inside the protons and neutrons in the nuclei of atoms) and some exotic particles like neutrinos. These are truly fundamental particles in the sense that they can’t be broken down into simpler components. Further, the theory says that interactions between these particles (forces between matter) are carried by another set of 4 fundamental particles, which includes the photon (electric and magnetic forces, carrier of light), the W and Z bosons (responsible for radioactivity) and the gluons (responsible for holding quarks together inside protons and neutrons).
This set of 16 components (12 + 4) completely describes all visible matter in the universe and its interactions. Just a note here: the Standard Model does not describe the gravitational force (which attracts two massive objects towards each other and makes the planets orbit around the sun). Physicists have not been able to come up with a consistent theory of gravity which is compatible with quantum mechanics, the foundation on which the Standard Model is constructed. So the Standard Model describes everything except gravity, and we have a separate theory of gravity. The two have not yet been made mathematically consistent with each other.
The Standard Model is the most successful scientific theory ever. It has been tested by various experiments over the past 50 years to unprecedented accuracy. The theory has predicted the existence of various particles and phenomena, and experiments have found them years later. All 16 fundamental components have been experimentally verified. So everything has been great, except for one thing, one pretty major thing. In the simple version of the Standard Model, these 16 fundamental particles are all massless (that is, they have zero mass) and all travel at the speed of light. But this is not what we observe in nature. Some of them are massless (like the photon and the gluons), but all the others have been measured to have widely ranging masses. The electron is very light but has a non-zero mass. The lightest quark is only a few times heavier than the electron, but the heaviest quark is over 300,000 times heavier. What endows fundamental particles with masses spanning such a wide range? This was the question that the original Standard Model failed to address.
In the 1960s, six different theoretical physicists (one of whom was the Englishman Peter Higgs) working at different places tried to answer this question. They postulated that all of space is filled with a “field” which interacts with the fundamental particles. Think of it as sand or a viscous liquid like honey. Over time, this became popularly known as the “Higgs field”. The basic idea is that fundamental particles moving through space are like beads being pulled through honey. Light particles like the electron interact with this field negligibly, so they fly through space easily and have little mass. Massless particles like the photon do not see this field at all. Heavy particles feel a massive drag from this field and acquire a large mass. The Higgs field (which permeates all of space, from inside the atoms making up you and me to the farthest galaxies) seemed to be a good mathematical description and a good explanation for why particles have mass. In accordance with quantum mechanics, this Higgs field was supposed to be composed of fundamental particles called Higgs bosons.
So in order to explain why particles have mass, the Standard Model had to add one more component to the earlier 16: the Higgs boson. And this was the only component not confirmed by experiments, until now. Two independent experiments at the LHC (Large Hadron Collider) have reported the discovery of a previously unknown particle whose behaviour is consistent with the Higgs boson. The LHC is a 27-km-long underground tunnel beneath the Franco-Swiss border where protons travelling at nearly the speed of light collide head-on with each other. With the discovery of the Higgs, the Standard Model stands complete as the description of physical reality in the visible universe (with the exception of gravity, of course). The final piece of the puzzle. A true testament to human endeavour that we can understand so much about the workings of nature given that we ourselves are a product of it.
Regarding the term “God-particle” in reference to the Higgs boson, most physicists consider it a misnomer. It was coined by the Nobel prize-winning physicist Leon Lederman in his popular science book of the same name. He sort of equated the importance of the Higgs, which endows all particles with mass, to something akin to a God who endows the universe with its properties. The popular media, of course, was just waiting for a catchy phrase like this and jumped on the bandwagon. The Higgs and its discovery don’t really say anything about the existence or non-existence of God (if you can define the term “God” in the first place).
Lastly, scientists – unlike the media – are very careful regarding their claims and discoveries. So the status right now is that a “Higgs-like” particle has been discovered. The only thing they are claiming with certainty is that a new particle has been discovered. Experiments so far suggest that it is most likely the Standard Model Higgs, but more data is required to confirm that. There are also various versions of the Higgs theory (in one case, 5 different Higgs bosons are postulated), and it is not yet clear to which of these the discovered particle corresponds. To summarize: a great achievement for humankind and a cause for celebration and reflection. However, the quest for deeper knowledge continues in earnest!
Recent work in linguistics strongly suggests that almost all of the 5000-odd current human languages may have been derived from a single ancient proto-language. In a fascinating statistical study of the syntactical structure of human languages, Nobel-prize winning physicist Murray Gell-Mann from the Santa Fe Institute along with linguist Merritt Ruhlen from Stanford University conclude that the basic word-order in this proto-language would most certainly have been SOV (Subject-Object-Verb).
Every language has a syntax that determines the basic word-order of meaningful sentences. For example, the authors illustrate the SVO (Subject-Verb-Object) ordering of modern English with the sentence – “the man (S) killed (V) the bear (O).” There are six possible word-orders (SOV, SVO, VSO, VOS, OSV, OVS), out of which only three are commonly found (SOV, SVO, VSO). SOV is the most common order, found in German, Hindi, Japanese, Persian and Tamil, followed by SVO that accounts for languages such as Chinese, English, Greek, Hebrew and Swahili. VSO is found in languages like Irish, Welsh, Tagalog and Maori.
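The six possible word-orders are simply the permutations of the three elements S, V and O. A few lines of Python (purely illustrative, not from the paper) make the enumeration concrete:

```python
from itertools import permutations

# Enumerate every possible ordering of Subject, Verb, Object.
orders = ["".join(p) for p in permutations("SVO")]
print(orders)  # ['SVO', 'SOV', 'VSO', 'VOS', 'OSV', 'OVS']

# Per the study, only three of the six occur commonly in human languages.
common = {"SOV", "SVO", "VSO"}
rare = set(orders) - common
print(sorted(rare))  # ['OSV', 'OVS', 'VOS']
```

Three orders thus account for the vast majority of the world's languages, while the remaining three are vanishingly rare.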
Archeological evidence points to the sudden appearance of strikingly modern behaviour in humans around 50,000 years ago in the form of sophisticated tools and art like painting, sculpture and engravings. A possible reason for this could be the development of a fully modern human language, the proto-language that eventually gave rise to all the current languages.
Gell-Mann and Ruhlen analyzed the distribution of word-orders in a sample of 2,135 languages, classified into seven major families. They conclude that five of them (Congo-Saharan, Indo-Pacific, Australian, Dene-Caucasian, Nostratic-Amerind) were originally SOV, one (Khoisan) must have been either SOV or SVO, and another (Austric) was SVO. This strongly favours the proto-language being SOV. The extant SOV languages have apparently not changed their structure since their origin.
Some languages permit more than one word-order. Russian, for example, can have all of the six possible orders, although its basic order is SVO. The authors looked at 125 different languages that have two competing word-orders, and found that the most common combination was SOV/SVO, followed by SVO/VSO. They propose a linguistic arrow of time, where languages primarily evolve from SOV to SVO to VSO (or VOS).
Specific sub-families of languages like the Indo-European, Anatolian, Uralic, Nostratic, Dravidian and Afro-Asiatic were analyzed in detail and shown to most likely have an SOV origin, accounting for changes occurring due to the influence of other languages in geographic proximity. For example, almost all of the numerous languages from various families used in India have an SOV structure.
The Amerind family contains languages with all six possible orders. Even here, all the branches have at least some SOV languages. Further, sub-families like the Andean and Macro-Carib contain mostly SOV languages, along with a few rare OVS and OSV languages. No other word-order occurs in these families. This suggests that the rare OVS and OSV orders derive directly from the original SOV (unlike the VSO/VOS orders, which derived from SVO, which in turn derived from SOV).
Analysis of the Austronesian branch of the Austric family further revealed that some languages may revert to SVO order from the VSO/VOS order, and can oscillate between the two word orders.
Gell-Mann, M., & Ruhlen, M. (2011). The origin and evolution of word order. Proceedings of the National Academy of Sciences, 108(42), 17290-17295. DOI: 10.1073/pnas.1113716108
Recording our dreams when asleep and then watching them as movies when awake – it surely must be an idea fantasized by many of us. An experiment conducted at the University of California, Berkeley now shows that this may not be as far-fetched as it sounds.
The Berkeley researchers reconstructed movies of visual experiences of people from their measured brain activity. In essence, they were able to ‘see’ what the subjects had seen by just monitoring the activity in their visual cortex, an area in the back of the brain responsible for processing visual information.
Using functional Magnetic Resonance Imaging (fMRI), the scientists scanned areas of the visual cortex where neurons showed a high rate of electrical activity while three subjects watched hours of pre-determined movie clips (control data set). Active neurons spend more energy and thus induce an increase in the local flow of blood carrying oxygen-rich haemoglobin.
fMRI can detect the difference in magnetic properties of oxygenated and de-oxygenated haemoglobin, the so-called BOLD (Blood-Oxygen-Level-Dependent) signal. Typically, BOLD signals vary slowly with time, with changes occurring on the timescale of several seconds. In order to track the fast dynamic changes occurring in neural activity while watching movies, the scientists decoded signals recorded from the control data set of movies using sophisticated signal and image processing techniques. The decoding method was able to identify the specific movie stimulus which induced a particular BOLD signal with more than 75 percent accuracy.
The researchers created a ‘dictionary’ of fundamental brain activity patterns associated with unique shapes, edges and motion. It was found that different volume elements (voxels) of the visual cortex are sensitive to different kinds of visual stimuli. For example, voxels responding to direct, head-on views were able to track only static or slow-moving images. On the other hand, voxels responding to peripheral views preferred to track high-speed motion.
A random library of clips (separate from the control set) was built from 5000 hours of YouTube videos, and the ‘dictionary’ of basic patterns was used to predict the brain activity evoked by these clips. A new test set of movies was then shown to the subjects, and their brain activity was recorded using fMRI. Statistical techniques were used to identify 100 clips from the library whose predicted brain activity was most similar to the measured brain activity for each clip in the test set. These 100 clips were averaged together to generate a reconstruction of the visual experience of the subjects corresponding to each test clip.
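The matching-and-averaging step described above can be sketched in a few lines of Python. Everything below is a toy stand-in: the tiny ‘library’, the vectors and the use of Pearson correlation as the similarity measure are invented for illustration, while the actual study used far more sophisticated statistical models on real fMRI data:

```python
import math

def correlation(a, b):
    """Pearson correlation between two equal-length activity vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def reconstruct(measured, library, k=2):
    """Average the k library clips whose predicted brain activity best
    matches the measured activity (a toy version of the averaging step)."""
    ranked = sorted(library,
                    key=lambda e: correlation(measured, e["predicted"]),
                    reverse=True)
    top = ranked[:k]
    n_pix = len(top[0]["clip"])
    return [sum(e["clip"][i] for e in top) / k for i in range(n_pix)]

# Made-up library: each entry pairs a 'clip' (a pixel vector) with the
# brain activity its appearance is predicted to evoke.
library = [
    {"clip": [1.0, 0.0, 0.0], "predicted": [0.9, 0.1, 0.2]},
    {"clip": [0.8, 0.1, 0.0], "predicted": [0.8, 0.2, 0.1]},
    {"clip": [0.0, 0.0, 1.0], "predicted": [0.1, 0.9, 0.8]},
]
measured = [0.85, 0.15, 0.15]          # activity recorded during a test clip
print(reconstruct(measured, library))  # a blend of the two best-matching clips
```

The reconstruction is thus not a direct readout of the brain but a weighted blend of known footage, which is why the resulting movies look blurry yet qualitatively recognizable.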
The video at the top of this post shows some of the reconstructions – while they are not exactly HD quality, the qualitative features of the images are captured quite well. The authors propose an improvement in the quality of the reconstruction by having a larger library of clips to select from. They further suggest the tantalizing prospect of employing their technique to decode dynamic involuntary subjective mental states like dreaming or hallucinating.
Nishimoto, S., Vu, A., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. (2011). Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies. Current Biology, 21(19), 1641-1646. DOI: 10.1016/j.cub.2011.08.031
In the legendary Star Wars saga, the two main characters Luke Skywalker and Anakin Skywalker (Darth Vader) hail from the fictional planet Tatooine, which orbits around two suns. The video above shows two beautiful scenes of binary sunsets on Tatooine from episodes IV and III respectively.
NASA scientists have now discovered such a circumbinary planet orbiting a binary star system using their Kepler space telescope. Kepler, with its 1-meter mirror, monitors the brightness of more than 150,000 stars in the constellations Cygnus and Lyra. A transiting planet is detected as a tiny blip in the brightness of a star (as small as one part in a thousand), as the passing planet causes a miniature eclipse. This happens in the lucky situation when the orbital plane of the planet is oriented edge-on as viewed from earth.
The star system Kepler-16 was identified as a binary by the detection of mutual eclipses with a 41-day period. The total brightness of Kepler-16 was found to fall by about 15% every 41 days, as star B partially eclipses the bigger and brighter star A. About 20 days after every such occurrence, the total brightness of Kepler-16 was found to reduce by about 1.5%, as the smaller star B was occulted by the bigger star A. The stars A and B are thus revolving around their common center of mass.
Apart from the primary and secondary stellar eclipses, two further partial eclipses were found to occur with a period of about 229 days. The total brightness dropped by about 2% and 0.1% respectively, as the planet Kepler-16b transited star A and then star B every 229 days. The period of these eclipses varied about the mean by about 10 days, because stars A and B were in different positions on their mutual orbit each time the planet moved in front of them.
Based on the colors of light emitted by Kepler-16, star A was determined to be about 70% of the size of our sun, while star B was determined to be only 20% as big as our sun. Conclusive evidence for Kepler-16b being a planet came from measuring the deviations in the period of the primary and secondary stellar eclipses. They were found to depart from strict periodicity by about one minute, caused by the gravitational pull of the planet on the two stars. The planet was deduced to be similar in size to Saturn, but with a higher density. It presumably contains 50% gas (hydrogen and helium), and 50% heavy elements (ice and rock).
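A quick back-of-the-envelope check on those brightness dips: a transiting body blocks starlight roughly in proportion to the projected area ratio, depth ≈ (R_planet/R_star)². The sketch below uses illustrative round numbers (a Saturn-sized planet, star A at 70% of the sun's radius, as stated above), not the paper's fitted values:

```python
# Rough transit-depth arithmetic: the fractional dip in brightness is
# approximately the ratio of the planet's disk area to the star's disk area.
# All numbers here are rough illustrative values, not the fitted parameters.

R_SUN_KM = 696_000             # solar radius in km
r_star_a = 0.70 * R_SUN_KM     # star A, ~70% the size of the sun
r_planet = 58_000              # a Saturn-sized planet, in km

depth = (r_planet / r_star_a) ** 2
print(f"transit depth across star A ≈ {depth * 100:.1f}%")
```

This crude estimate gives a dip of about 1.4%, the same order as the roughly 2% transit depth reported for star A, which is why such transits are easily visible in the Kepler light curve while the 0.1% transit of the small star B is much harder to pick out.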
Although the mean distance of the planet Kepler-16b from its two parent stars is only 70% of the distance of the earth from the sun, its surface temperature is estimated to be between -100 and -70 degrees Celsius. The planetary orbit was found to be exactly coplanar with the stellar orbit, suggesting that the stars and the planet shared a common origin.
Doyle, L., Carter, J., Fabrycky, D., Slawson, R., Howell, S., Winn, J., Orosz, J., Prša, A., Welsh, W., Quinn, S., Latham, D., Torres, G., Buchhave, L., Marcy, G., Fortney, J., Shporer, A., Ford, E., Lissauer, J., Ragozzine, D., Rucker, M., Batalha, N., Jenkins, J., Borucki, W., Koch, D., Middour, C., Hall, J., McCauliff, S., Fanelli, M., Quintana, E., Holman, M., Caldwell, D., Still, M., Stefanik, R., Brown, W., Esquerdo, G., Tang, S., Furesz, G., Geary, J., Berlind, P., Calkins, M., Short, D., Steffen, J., Sasselov, D., Dunham, E., Cochran, W., Boss, A., Haas, M., Buzasi, D., & Fischer, D. (2011). Kepler-16: A Transiting Circumbinary Planet. Science, 333(6049), 1602-1606. DOI: 10.1126/science.1210923
The Lynx, named for its bright, reflective eyes, belongs to the magnificent wildcat family. The Iberian Lynx is found exclusively in isolated pockets of the Iberian peninsula, mainly in Spain. With fewer than 300 individuals remaining, it is one of the most critically endangered species on earth. Its population has reduced dramatically in the last century, attributed mainly to the decline of its main prey species, the European rabbit, and the loss of its habitat due to human activities.
Current individuals of the Iberian Lynx are unusually similar at the genetic level, with extremely low variation in DNA. DNA is a macromolecule containing a long sequence built out of four basic units – A, G, T, C – and is present in every cell of an organism. It is akin to software that ‘codes’ the structure and functioning of the entire organism. The DNA sequence of individuals within a species is very similar, but there are notable differences leading to slightly different traits. Most of the DNA resides in a region of the cell called the nucleus, but some of it is also found in the mitochondria, the powerhouse of the cell. Mitochondria break down food to a form which can be used by the cell for energy.
The mitochondrial DNA (mtDNA) has a ‘control region’ which shows the most variation among individuals of a species. However, the current population of Iberian Lynx shows almost no variability in the mtDNA control region sequence. This lack of genetic diversity is generally feared to be detrimental to the future survival of the species, putatively resulting in inbreeding and reduced adaptability to the changing environment. Low genetic diversity is usually attributed to a ‘population bottleneck’ i.e. a period in history where the population was reduced to only a few individuals because of climatic or other environmental conditions.
Such a ‘population bottleneck’ for the Iberian Lynx could have occurred about 10,000 years ago, at the end of the last glacial period. Or it could have occurred just in the last century, when the population fell dramatically. A team of researchers from Spain, Denmark, the UK and Sweden have now analyzed bone and teeth samples of 19 different Iberian Lynx individuals, collected from different areas in Spain and spanning a time from 50,000 years ago to the last century. The samples were powdered and dissolved in solution, from which DNA was extracted and its concentration amplified through biochemical reactions.
The scientists compared a region of length 183 units in the mtDNA, and found no variation among the 19 individuals. The sequence is also the same as that found in the contemporary population. The genetic variation within a species is determined by the mutation rate of DNA and the population size. The same variation can be attained with a small mutation rate and a large population size, or a higher mutation rate and a small population size. Mutations in the DNA generally occur at a constant rate, which is known as the ‘molecular clock’. For wildcat species like the Lynx, the DNA is estimated to change at a rate of 5-25% per million years.
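Putting those numbers together gives a rough sense of why zero observed variation is informative. The sketch below simply multiplies the quoted mutation rates by the elapsed time and the length of the compared region; it is a back-of-the-envelope estimate, not the authors' actual simulation:

```python
# Back-of-the-envelope molecular-clock estimate: how many substitutions
# would be expected in the 183-unit mtDNA region over 50,000 years,
# at the quoted wildcat rates of 5-25% of sites per million years?

region_length = 183      # compared mtDNA sites
time_myr = 0.05          # 50,000 years, in millions of years

for rate_per_myr in (0.05, 0.25):    # 5% and 25% of sites per million years
    expected = region_length * rate_per_myr * time_myr
    print(f"rate {rate_per_myr:.0%}/Myr -> ~{expected:.2f} expected substitutions")
```

At the quoted rates, very roughly 0.5 to 2.3 substitutions would be expected over the full 50,000-year span, so finding none at all across 19 individuals is the kind of discrepancy the authors' population-size modelling has to explain.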
The authors combined their data with that of 26 other Iberian Lynx sampled previously and performed extensive computer simulations to estimate the evolution of population size over the last 50,000 years. They conclude that there is a very high probability that the population of the Iberian Lynx has remained relatively low throughout the past 50,000 years, comprising fewer than 8000 females at any given time. Thus it seems that a lack of genetic diversity may not be such a great threat to the survival of this great cat, and such fears should not dilute conservation efforts.
Rodríguez, R., Ramírez, O., Valdiosera, C., García, N., Alda, F., Madurell-Malapeira, J., Marmi, J., Doadrio, I., Willerslev, E., Götherström, A., Arsuaga, J., Thomas, M., Lalueza-Fox, C., & Dalén, L. (2011). 50,000 years of genetic uniformity in the critically endangered Iberian lynx. Molecular Ecology, 20(18), 3785-3795. DOI: 10.1111/j.1365-294X.2011.05231.x
Imagine sound emanating from a source (e.g. a speaker) that you can hear when facing one side of the source, but that is inaudible from the opposite side. Researchers at the California Institute of Technology have now developed an “acoustic rectifier” that allows certain tones to travel through it in one direction, but not in the opposite direction.
Sound is basically a ‘pressure wave’, which travels through a medium by inducing periodic regions of compression (where the particles in the medium are pushed closer to each other) and rarefaction (where the particles in the medium are pulled away from each other). The rate at which these regions of compression and rarefaction ‘vibrate’ sets the tone or frequency of the acoustic wave.
The scientists created a linear array (chain) of 19 stainless steel spherical particles stacked end-to-end. All the spheres had the same mass (30 grams) and radius (1 centimeter), except the second one from the left that had a smaller mass (6 grams) and radius (0.6 centimeters). The arrangement acts like a ‘nonlinear medium’, where different acoustic frequencies can ’mix’ with each other due to the large coupling between the particles. This is because the force acting on one sphere affects the neighbouring spheres also.
Acoustic waves were generated by an ‘actuator’ that periodically compressed the array from one end, at a certain rate or frequency. A constant force pushed the array from the other end. Two configurations were studied – forward (the actuator or source at the left end and the constant force on the right end) and backward (the actuator at the right end and the constant force on the left end). Sensors were embedded on a couple of spheres (one near each end) to measure the compressive force acting on them in real time, in order to study the passage of the acoustic wave through the chain of spheres.
The chain of spheres acts like a low-pass filter, allowing low-frequency or more bass tones to propagate through the medium, while blocking more treble tones having frequencies higher than the ‘cutoff frequency’. The inhomogeneity of the chain (due to the presence of the smaller sphere) also creates a ‘localized’ mode at a frequency higher than the cutoff – this tone falls off in intensity exponentially fast away from the small sphere.
The actuator is used to generate an acoustic wave at a frequency close to that of this localized mode. For the backward configuration, there was no acoustic energy detected at the other end and the acoustic wave was completely attenuated. For the forward configuration, the ‘localized’ mode of vibration is excited (since the small sphere is very close to the actuator), which then transfers energy to more bass tones due to nonlinear mixing. The sound at the high tones is thus essentially converted to frequencies lower than the cutoff, which can then propagate through the chain. The forward transmission of acoustic energy is thus much higher compared to the backward transmission. The range of tones transmitted can be easily controlled by the strength of the constant force acting on the chain, at the end opposite to the source.
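The one-way behaviour described above can be summarized as a toy decision rule. The frequency values below are arbitrary placeholders, not the experiment's actual parameters; the sketch only captures the logic of the rectification, not the physics of the granular chain:

```python
# Toy logic model of the rectifier described above: a tone above the chain's
# low-pass cutoff is normally blocked, but in the forward direction a tone
# near the localized-mode frequency gets down-converted by nonlinear mixing
# and passes. Frequencies are in arbitrary units (assumed values).

CUTOFF = 1.0       # chain passes tones below this (arbitrary units)
LOCALIZED = 1.2    # localized-mode frequency, above the cutoff

def transmits(freq, direction):
    if freq <= CUTOFF:
        return True    # bass tones pass in either direction
    if direction == "forward" and abs(freq - LOCALIZED) < 0.05:
        return True    # nonlinear mixing converts the tone to bass frequencies
    return False       # otherwise the tone is attenuated before the far end

print(transmits(1.2, "forward"))   # True  (rectified)
print(transmits(1.2, "backward"))  # False (blocked)
```

The asymmetry comes entirely from the small sphere sitting next to the source in one configuration but not the other, which is what the model's direction check stands in for.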
The authors propose that such acoustic rectifiers may be used to build ‘logic circuits’ to process sound directly, and could potentially be employed in ultrasound imaging for biomedical applications, in addition to enhancing energy harvesting technologies.
Boechler, N., Theocharis, G., & Daraio, C. (2011). Bifurcation-based acoustic switching and rectification. Nature Materials, 10(9), 665-668. DOI: 10.1038/nmat3072
Almost all life on earth today depends on oxygen for its survival, as it is the agent responsible for respiration and metabolism i.e. the breakdown of food for production of energy. There are some species of anaerobic (without oxygen) bacteria though, living in hot springs and hydrothermal vents on the ocean floor, that survive on sulphur and its compounds.
Scientists at the University of Western Australia and the University of Oxford in the UK have now discovered the oldest fossilized cells known to date, which seem to be 3.4 billion-year-old sulphur-metabolizing bacteria. These cells are evidence for life on earth just 1 billion years after our planet’s formation, when the atmosphere was devoid of oxygen. The relatively well-preserved cells were found in areas rich in pyrite (iron sulphide) in the old sandstone rocks of the Strelley Pool Formation in the Pilbara region of Western Australia.
The carbon-rich spheroidal and ellipsoidal microfossils ranged in size from 5 to 25 microns (a micron is a millionth of a meter), and were only found in pyrite (sulphide) rich regions of the sandstone. Clusters and chains of cells, frequently seen in other bacterial fossil remains, were observed that are suggestive of successful cell division. Cell walls rich in carbon and nitrogen were clearly visible, but they were damaged or punctured at many places, indicating the release of intracellular material leaving behind ‘hollow’ interiors.
Carbon isotope tests, which measure the relative concentrations of C-13 and C-12, also confirmed the organic nature of the fossils. C-12 is the most stable and abundant form of the carbon nucleus, containing 6 protons and 6 neutrons, while C-13 is a rare isotope containing an extra neutron. Organic matter has a lower C-13 to C-12 ratio than inorganic or non-living matter, and the microfossils indeed showed a low concentration of C-13 corroborating their biological origin. Optical spectroscopy tests further revealed that the carbon was in a disordered form, ruling out abiotic carbonaceous material like graphite.
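Geochemists usually express this “low concentration of C-13” as a delta value, in parts per thousand relative to a reference ratio. The sketch below uses the commonly quoted VPDB reference ratio; the sample ratio is hypothetical, chosen only to show how a C-13 depletion comes out as a negative delta:

```python
# Delta notation for carbon isotopes:
#     d13C = (R_sample / R_standard - 1) * 1000,   where R = 13C / 12C
# R_STANDARD is the commonly used VPDB reference; the sample ratio below
# is made up for illustration, not a measured value from the paper.

R_STANDARD = 0.011237          # 13C/12C of the VPDB reference

def delta_13c(r_sample):
    """Convert a raw 13C/12C ratio into per-mil delta notation."""
    return (r_sample / R_STANDARD - 1) * 1000.0

# A hypothetical sample depleted in C-13 relative to the standard:
print(f"{delta_13c(0.01090):+.1f} per mil")   # negative => C-13 depleted
```

Biological carbon fixation preferentially takes up the lighter C-12, so organic matter ends up with strongly negative delta values of this kind, while inorganic carbonates sit near zero.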
Sulphur was found to be present locally in the cell walls, and there was a high concentration of micron-scale pyrite grains in the vicinity of the cells, putatively formed from the metabolic by-products of the sulphur-consuming bacteria. Moreover, the isotopic concentrations of sulphur (S-33 and S-34) were found to be consistent with microbial processing of sulphur and its compounds e.g. the reduction of sulphates. This is strong evidence for a primitive ecosystem of unicellular organisms living on sulphur-rich sediments in an oxygen-less earth.
Wacey, D., Kilburn, M., Saunders, M., Cliff, J., & Brasier, M. (2011). Microfossils of sulphur-metabolizing cells in 3.4-billion-year-old rocks of Western Australia. Nature Geoscience. DOI: 10.1038/ngeo1238