Posts from the ‘Physics Friday’ Category

Stop the violins!

DH sent me a link to an article the other day that I found quite interesting. And since the science of acoustics is very much physics related, I thought I’d share it with you for Physics Friday!

I’ve played the violin for about 17 years now (wow, that’s longer than I really care to admit!), and as a player of a stringed instrument, I am quite familiar with the occupational chatter about famous makers and their antique instruments. In fact, most people are aware of the concept of high-priced old violins; every time someone finds an old one in the back of a closet, they are convinced they have discovered a new Stradivarius and that it’s worth millions of dollars. Unfortunately for them, that’s rarely the case, but the Stradivarius has become so legendary over the past century that contemporary violin makers have copied the design ad nauseam, even putting the name on their labels. No wonder people get confused; the word “Stradivarius” is clearly displayed inside just about every instrument!

With all the hype about these upper-echelon makers in the music community, I’ve often wondered what exactly makes these instruments so fabulous. Evidently, that’s a very good question, as nobody has been able to systematically quantify the specific characteristics that supposedly make them superior to all others. Moreover, it’s not at all unreasonable to suggest that there might be instruments out there that perform just as well. I know, blasphemous talk! However, this recent article on Ars Technica describes an experiment that puts this assertion to the test.

Twenty-one experienced violinists were each given six violins to play–two Strads, one Guarneri, and three “new” instruments no more than a few years old. The subjects played them to test the feel and sound while wearing special glasses that obstructed their view of the instruments, so as to eliminate any bias from seeing a violin and immediately recognizing it. It’s a small sample size, yes, but please do consider:

“The sample sizes here are admittedly small, but as the paper notes, ‘it is difficult to persuade the owners of fragile, enormously valuable old violins to release them for extended periods into the hands of blindfolded strangers.’”

DH and I both had to LOL at this line. 😉

The result was that the older violins were not necessarily preferred over the newer models. In fact, when asked which instruments the players would theoretically prefer to take home, only eight of the twenty-one would have chosen one of the three “golden age” instruments.

DH and I both agreed we would have liked to see a follow-up test with observers in the audience comparing their overall impressions as someone played the same piece on different instruments. I’m sure there are many other qualitative tests that could be done, as well as scientific research into the physical parameters of instruments–wood variety, wood thickness, varnish, shape, and acoustics–that might affect their overall tone and performance.

So, the test is far from definitive, but I believe it’s compelling enough to warrant some rational thought about the supposed worth of these instruments. I’m not saying that they are not good instruments or are not valuable at all; their legendary status must have evolved for some reason. However, it’s a little silly to assume outright that no instrument constructed outside this “golden age” can match the famous makers. This is encouraging to me; while the “test” instruments in this study still cost over $30,000 each, that’s at least in the realm of realistically obtainable for an average Joe. Furthermore, while my two violins are valued at a fraction of even this lower price, that doesn’t mean they can’t hold their own. Plus, both of them have much more value to me than just a price tag.

Again, the full article can be read here: http://arstechnica.com/science/news/2012/01/million-dollar-violins-dont-play-better-than-the-rest.ars

The 10,000 Year Clock

Today I read about something that I was honestly quite shocked to have never heard of before. It seemed like the perfect topic to feature here on Physics Friday.

Evidently, in the desolate mountains of west Texas, a site is being prepared for a monumental clock that is designed to keep time on its own for ten thousand years. The obvious question is, why on earth do we need a clock like this? According to the clock’s inventor, Danny Hillis, part of the premise is simply to get people to ask that question and to bring awareness to the idea of the long term. It’s hard to refute that our current culture is fast-paced and focused on the now. We don’t know the future, but does that mean we can completely neglect all thought of it? It’s evidently about our generation interacting with generations not even imagined yet. Plus, I’m sure the challenge of such a large-scale project is satisfying on many levels to those who are working on it.

The idea for the clock was conceived in 1995 by Hillis, an electrical engineer, and with funding from Jeff Bezos, the billionaire founder of Amazon.com, the clock is becoming a reality. The site is being precision-excavated inside a mountain, and the pieces of the clock are coming together in California. Constructing a completely mechanical object that will run continuously for so long presents many challenges in design, choice of materials, and sustainability.

For the clock to run continuously for 10,000 years, it has to be robust. The all-mechanical system is made of durable materials such as stainless steel, titanium, and ceramic; however, none of these materials has been tested to even a fraction of this timescale. The chamber will be as air-tight as possible, though eventual gumming of the unoiled gears is always a possibility. The pendulum is designed with a period of ten seconds instead of one; ticking a tenth as often means roughly a tenth of the wear on the mechanism over the clock’s life.

For the clock to run continuously for 10,000 years, it has to tick on its own with no external human assistance. To accomplish this, the clock is synchronized to the outside world, essentially to the cycle of the days. This will most likely be done by detecting the change in outside temperature between day and night, which will be a variation of tens of degrees at the remote desert location. Recalibration involves slightly speeding or slowing the pendulum. Human interaction is needed only to operate the clock chimes and show the actual time. The ten chimes are controlled by “the world’s slowest computer”–twenty Geneva wheel gears that generate a unique melody each time instead of the same tune over and over; with about 3.5 million combinations, a melody should essentially never repeat. And instead of having the clock face constantly update with each tick, the current time will only show when certain gears are wound. Thus, the next visitor to the clock will always know when the previous visitor was there.
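
Where does that 3.5 million come from? Here’s a back-of-the-envelope check, assuming each melody rings the ten bells once each in some order–my reading of the design, not something spelled out in the material I saw. It lands remarkably close to the number of days in ten thousand years:

```python
import math

melodies = math.factorial(10)   # orderings of ten bells rung once each
days = 10_000 * 365.25          # days in ten thousand years

print(f"Possible melodies:    {melodies:,}")   # 3,628,800 -- the ~3.5 million
print(f"Days in 10,000 years: {days:,.0f}")    # 3,652,500 -- about one melody per day
```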

Visiting the clock requires a day-long trek into the desert and back. It’s also important to consider that at some point during the next 10,000 years, the clock might be long forgotten. The clock therefore had to be designed so that someone who simply stumbles upon it would intuitively know what it is and how to work it. Thus, the experience of going to the clock was designed even before the actual clock itself.

But there are so many more questions. In ten thousand years, will humans even still exist? Will they still be able to perform a physical task such as winding the gears? What about catastrophic changes to the face of the planet…might something happen that alters the geography so drastically that the cave and clock are ripped apart or submerged under an ocean? The answers to those questions are unknowable, of course, but that does not stop the clock’s designers from strongly believing that a project on this physical and philosophical scale is important to undertake. As for me, I understand the importance of living today while also preparing for the future, but I can’t say that I have totally wrapped my mind around this whole concept quite yet. Due to my personal beliefs, I might be inherently skeptical of our world lasting that long, but I certainly won’t tell you that it’s impossible, because that’s impossible to know.

So, I’m still sorting out my thoughts about this project, and I hope you will ponder it, too, at least for a little while. I encourage you to read more about it, as I have only given you a brief glimpse of it here.

http://longnow.org/

Faster than the speed of light?

You might have seen an article about it somewhere in the last day or so, but such a controversial topic in physics can’t be neglected here! Evidently, researchers at CERN, a high-energy research facility near Geneva, have observed that a type of neutrino, a super-small sub-atomic particle, beamed to a receiver facility in Italy is making the trip about 60 nanoseconds sooner than a particle traveling at the speed of light would. This means that these particles appear to be traveling faster than the speed of light–a claim that, if true, would rock the foundations of the last century of modern physics.
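
To get a sense of scale for 60 nanoseconds, here’s a quick back-of-the-envelope calculation in Python. The ~730 km baseline from CERN to the receiving lab in Italy is my assumption from public descriptions of the experiment, not a figure from this post:

```python
C = 299_792_458      # speed of light, m/s
BASELINE = 730e3     # CERN to the Italian detector, m (assumed)
EARLY = 60e-9        # reported head start, s

light_time = BASELINE / C     # time light needs for the trip
excess = EARLY / light_time   # implied fractional speed excess, (v - c)/c

print(f"Light travel time: {light_time * 1e3:.3f} ms")   # ~2.4 ms
print(f"Implied (v - c)/c: {excess:.1e}")                # ~2.5e-5
```

In other words, the neutrinos would be beating light by only a few parts in 100,000–tiny, but if real, not zero.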

Einstein’s theory of relativity is fundamentally based on the assertion that nothing can travel faster than the speed of light. Breaking that limit would have multiple ramifications, the most tantalizing of which is time travel. While this announcement doesn’t mean we should be dusting off our DeLoreans just yet, if the claim is proven to be real, not only will proponents of time travel be somewhat vindicated, at least in principle, but the whole of modern physics as understood by 20th- and 21st-century scientists will have to be completely reviewed and possibly revised.

Rest assured, however, that nothing can be definitively concluded from this claim alone. This was a phenomenon observed multiple times over three years by one group–evidently solid enough for them to come forward to the scientific community as a whole, as if to say, “well, what now? Let’s all scratch our heads a bit over this one,” while stopping short of a bold, radical claim lest a mundane explanation turn up and make them look foolish (which has happened a few times before in science history!). Now that it’s out there, many groups of researchers will put their efforts into ferreting out what’s actually going on, and maybe one day we’ll have an explanation of the matter.

History is peppered with scientific discoveries that were utterly shocking and seemingly unthinkable at the time. Claims that the earth is round, not flat. The idea that it revolves around the sun, rather than everything revolving around it. Quantum mechanics. Every time, models that had worked for us for centuries, even millennia, were shown to be inadequate as new breakthroughs in technology allowed us to dig deeper into realms never before explored. This could very well be another of those events; however, it’s just as likely to be a false alarm. Either way, rigorous experimentation and corroboration of evidence will eventually show whether the speed of light is the ultimate limit in the universe, or whether the rules are, indeed, made to be broken.

Spherical cows and being wrong

“Why my fellow physicists think they know everything and why they’re wrong”

It’s a busy day in the lab, so I thought I would just quickly share an opinion article that I read this week about physicists and their views on subjects outside their areas of expertise. The author argues that physicists know just enough to be dangerous and not quite enough to be right when it comes to other areas of study. This may or may not actually be the case, but you can read his arguments and decide for yourself.

Personally, I try very hard not to expound upon things outside my knowledge so as to avoid this sort of thing. And when describing physics topics to you, I also try exceedingly hard to explain them correctly, even if it’s on a very basic level while neglecting the more subtle bits that would be important if you were actually trying to use it in an experiment. The wonderful thing about physics is that most of the time approximating a spherical cow gets you close enough for a solid understanding anyways. 😉

Most of the time, it can be hard for a scientist to say that they don’t know–which is quite silly, actually! The reason we do scientific research is because we don’t know. If we knew, why would we spend long hours in a cold, dark, and dreary lab trying to figure out the answers? Similarly, if someone asks me about quantum mechanics or general relativity, I just have to honestly say that I’ve spent too many long hours in a cold, dark, and dreary lab with a laser trying to understand frequency stability and haven’t quite gotten around to QM or general relativity yet. Given my basic knowledge of all areas of physics, I can possibly give you a “spherical cow” answer, but I won’t pretend I know and give you a pile of bull crap. 😉

Physics Friday: Atomic Aplomb

For today’s Physics Friday, let’s take a look at some basic properties of atoms. Understanding this fundamental building block of all matter will be a stepping stone to other interesting ideas we will explore together.

An atom is the most basic individual unit of matter in the universe. It is made from protons and neutrons (which form the nucleus) and electrons (essentially “orbiting” around the nucleus). Protons have a positive electrical charge, electrons have a negative electrical charge, and neutrons are not charged. The number of protons in the nucleus defines the kind of atom, which we call an element (as seen on the Periodic Table); the number of neutrons determines the isotope of that element. There are something like 118 elements as of 2011; 94 are naturally occurring, while the others have been forcibly created in a laboratory and usually last only for fractions of a second. The atomic number (the big one you see on the Periodic Table) labels the element by the number of protons in the nucleus. For example, Hydrogen (atomic number 1) has one proton in its nucleus (in fact, its most common form is the only atom with no neutrons at all). Likewise, the fifth element, while also a Bruce Willis movie, is more technically Boron, which has five protons.

Along with the protons and neutrons in the nucleus, the only other atomic ingredients are electrons. As mentioned above, they circle around and around the nucleus, essentially held in orbit by the electrostatic attraction between their negative charge and the positive charge of the protons. The number of electrons increases as the number of protons increases, usually keeping the atom’s net charge at zero. However, it is possible for an atom to gain or lose electrons, giving it a net positive or negative charge. Such atoms are called ions, but as long as the number of protons remains unchanged, an ion is still the same element regardless of its charge.

This is where things take a left turn from pretty normal and comprehensible and go careening into the strange and perplexing world of quantum mechanics. Since I intend this blog to be fun as well as informative, I won’t go into all of the gory details here. Instead, I’ll give you a practical working model of how electrons behave as they move around the nucleus in terms of energy.

As electrons move around the nucleus, they are, for reasons unknown to mortal man, confined to certain fixed “orbits,” if you will. Each orbit is characterized by the amount of energy required to sit there and is referred to as an energy level. Adjacent energy levels are separated by specific amounts of energy. An electron that’s kind of lazy (has less energy) will stay in the lowest energy level, which is called the ground state. An electron that has more energy will be found in an orbit farther from the nucleus.

Generally speaking, electrons fill up these energy levels just like cars take up parking spots in a parking lot–they fill from the inner, lower energy levels to the outer, higher energy levels, much as most people park in the spots closest to the door first (this is called the Aufbau Principle; for some reason, I always remembered the name of it!). Sometimes, electrons hanging out by themselves in outer levels can be coaxed to leave the atom, creating the aforementioned ion. Electrons moving between atoms are the source of electrical current. In fact, current is just defined as the flow of electrons down a certain path, like a wire in a circuit.
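
If you like seeing rules written down explicitly, here’s a minimal sketch of that parking-lot ordering in Python. It uses the Madelung rule (fill subshells in order of increasing n + l, ties going to the smaller n), which is the usual shorthand for the Aufbau Principle–keep in mind that real atoms have a handful of well-known exceptions (copper and chromium, for instance):

```python
L_LETTERS = "spdf"  # subshell labels for l = 0, 1, 2, 3

# All subshells (n, l) up to n = 5, with l < n (and l <= 3 for real elements)
subshells = [(n, l) for n in range(1, 6) for l in range(min(n, 4))]

# Madelung rule: sort by n + l, breaking ties with the smaller n
subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

print(" -> ".join(f"{n}{L_LETTERS[l]}" for n, l in subshells))
# 1s -> 2s -> 2p -> 3s -> 3p -> 4s -> 3d -> 4p -> 5s -> 4d -> 5p -> 4f -> 5d -> 5f
```

Note how 4s fills before 3d–just like the closest parking spots go first, even if they’re in a different row.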

If an electron in a lower energy level receives a kick of energy exactly equal to the energy difference between its current level and a higher one, the electron absorbs the energy and jumps up to that level, provided there isn’t an electron already occupying it. This process is called exciting the atom. However, the incoming energy has to be exactly equal to an energy difference in order for the electron to jump; there’s no place for it to land halfway between levels if no energy level exists there. After the electron has been in the upper energy level for some amount of time, it will eventually hop back down to a lower level, releasing the energy it absorbed in the first place.
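
To make the “exact energy or nothing” rule concrete, here’s a minimal sketch using the Bohr model of hydrogen, where the energy of level n is -13.6 eV divided by n squared. The Bohr model is an approximation (the full quantum treatment has gorier details), but its numbers for hydrogen are quite good:

```python
E1 = -13.6  # ground-state energy of hydrogen, in electron-volts (eV)

def level_energy(n: int) -> float:
    """Energy of the n-th Bohr level of hydrogen, in eV."""
    return E1 / n**2

def photon_to_excite(n_low: int, n_high: int) -> float:
    """The exact photon energy (eV) needed to jump from n_low up to n_high."""
    return level_energy(n_high) - level_energy(n_low)

# Jump from the ground state (n=1) to the first excited state (n=2):
print(f"{photon_to_excite(1, 2):.2f} eV")  # 10.20 eV -- anything else won't do
```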

This incoming and outgoing energy is carried by photons, but, in order to keep things in bite-sized pieces around here, we will talk about those more in depth in another episode. This excitation of electrons and their subsequent hopping betwixt energy levels is the main idea I wanted to get across today, as it will be crucial to our understanding of lasers, which is the goal toward which we are building with many of these Physics Friday posts. Just a few more pieces, and we’ll put it all together!

Let’s make some noise!

Today for Physics Friday, let’s make some noise! Specifically, let’s talk about signals and noise, what they are, and why we care about them.

Most of the time, we are primarily interested in signal. I would broadly define a signal as any source of information that is of interest to the observer. For instance, a satellite can beam a signal down to your television that you can watch. A doctor listens to your heartbeat through a stethoscope. An antenna collects radio waves and sends the resulting music through your car speakers.

However, I’m sure we have all faced situations where, for example, we couldn’t actually hear the song playing on the radio because of the static coming through the speakers as well. That static is an example of noise…not just because it’s noisy in an audible sense, although the ideas are definitely related. Noise is any source of information that is not related to the desired signal yet competes with it for attention. If you’ve ever tried to tell a story while a younger sibling chattered over you, you were the signal and your little brother was the noise.

In a more technical sense, a signal is usually an electronic pulse defined by a changing voltage as it travels through wires of a circuit or other electronic devices. If you remember back to our discussion of frequency and sine waves, then you can envision most electronic signals as having voltages that vary in time like the sine wave. There are other shapes of electronic signals, but we’ll stick to this one in our analysis here. Noise on a sine wave is considered any fluctuation in the amplitude or zero crossing (time, or phase) from the expected value.
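
Here’s a small sketch of that idea in Python, with made-up noise levels: a sine wave whose amplitude and phase each jitter a little from sample to sample:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)   # one second of samples
f = 5.0                       # signal frequency, Hz

clean = np.sin(2 * np.pi * f * t)

amp_noise = 0.05 * rng.standard_normal(t.size)    # amplitude fluctuations
phase_noise = 0.02 * rng.standard_normal(t.size)  # zero-crossing (phase) fluctuations

noisy = (1 + amp_noise) * np.sin(2 * np.pi * f * t + phase_noise)

print(f"Worst deviation from the clean wave: {np.max(np.abs(noisy - clean)):.3f}")
```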

Often, you will hear people discussing the signal-to-noise ratio (or SNR). This parameter describes exactly what it says: it compares the amount of signal you have to the amount of noise you have. For audible signals, you would compare the loudness of the signal to the loudness of the noise. For a light signal, you might compare the brightness or intensity. For an electronic signal, you would compare the voltages (which, if you were curious, are related to power through Ohm’s law: P = V^2/R, where R is the resistance in your circuit). As with any fraction, if your signal on top is really big and your noise on bottom is really small, then the signal-to-noise ratio is big, which is generally desired. However, if the noise on the bottom of the fraction gets too big, the overall fraction goes down. This results in the aforementioned dilemma of not being able to pick out the signal: i.e., not hearing the song on the radio through the static.
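
Because power goes as V^2/R, an SNR computed from voltages is usually quoted in decibels with a factor of twenty. A toy example with invented numbers:

```python
import math

def snr_db(v_signal: float, v_noise: float) -> float:
    """SNR in decibels: power scales as V^2, so the voltage ratio gets 20*log10."""
    return 20 * math.log10(v_signal / v_noise)

print(f"{snr_db(1.0, 0.01):.0f} dB")  # 40 dB: strong station, easy listening
print(f"{snr_db(1.0, 0.5):.0f} dB")   # 6 dB: mostly static, hunt for a new station
```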

Simplistically, we would just continue to crank up the signal to increase the SNR. However, most systems that generate signals are complex, and putting more energy into the system to crank up the signal will inevitably cause more sources of noise to appear and get larger as well. Therefore, this isn’t always an option.

Day to day, you and I get plenty of signal–enough to drown out the noise. But in scientific applications that require lots of precision, noise can be a big problem. For clocks, this means that you and I will know what time it is well enough to get to work on time, but for something like GPS that needs many decimal places of clock accuracy, or for a signal that is very faint, noise starts to matter. It’s very easy to compete for attention at seventeen decimal places! It’s like trying to go to sleep with someone mowing the lawn right outside your window…something you would barely notice or hear when awake and surrounded by normal daytime sounds. When the overall signal goes down, the noise is much more pronounced.

In my research, my bread and butter is analyzing the noise that gets in the way of our signals. Not only do I try to locate the sources of noise, I also characterize what kind of noise it is and, ultimately, try to get rid of it! Indeed, my overall thesis topic is that of generating low noise signals from optical (laser) sources.

A colleague of mine who is older and has had an illustrious career in astronomy and physics is fond of saying that, as a youth, he only cared about signals (that is an astronomer’s bread and butter!). However, as he got older, he began to find that the noise that he ignored became more and more interesting and important to him. Now he looks at noise almost exclusively. He told me I was very mature and wise beyond my years to get interested in noise now while I am young. 😉 Granted, I did just fall into this area of research without much forethought, but it indeed has been interesting with benefits to all areas of science. My friends in other areas of physics occasionally ask me about it, and I get queries from all over the world at conferences or in response to journal papers about my work.

In some ways, looking at noise is kind of like being the exterminator of the science world, getting rid of all the pests that get in the way of someone’s main agenda. But as technology pushes toward fundamental limits, noise becomes a very important issue. Fortunately for science, I’m here with the noise flyswatter!

Before this analogy gets any sillier, I’ll just abruptly wrap up this edition of Physics Friday. Stay tuned in coming weeks for more science topics you never thought you’d want to know!


SI units: the bane of 300 megaAmericans

Today, I wanted to spend Physics Friday giving you an introduction to SI units. We’ve used them in our past physics and science discussions, but I wanted to give you a formal explanation of what they are, why we have them, and how they are defined.

Units are “ticks on a ruler” for different quantities that one might want to measure. Day to day, we might measure volume, weight, length, temperature, and time, and we assign a unit to each quantity to describe how much of each we measured. Now, historically there have been as many units as there have been societies–each culture defined their own weights and measures using artifacts that were relevant to them. For instance, the length of a monarch’s foot might define the basic unit of length for the kingdom (hence the name of our common unit today!).

These units were not standardized between societies and sometimes not even within a society. Multiple sticks representing ye royale foote might exist throughout the kingdom, but nothing guaranteed accuracy in manufacturing of these sticks, or that a greedy textile merchant wouldn’t slyly file down the stick in order to charge a customer for more cloth than was actually sold. Also, trading between two societies could be tricky without a common measurement comparison.

As societies developed, so did their definitions and administration of units. Part of a government’s responsibility is to establish and uphold weights and measures to protect trade and business in a free market economy. As the world became smaller, so to speak, and as science and technology developed, the stage was set for the creation of a universal system of units.

In the late 1700s, scientists in France began pushing for a new system of units, and in 1799 the metric system was adopted in France. The fact that they actually succeeded was itself quite an accomplishment; it took a revolution and the rise of Napoleon to cement the change, but it did take hold. However, as we Americans know, it is not easy to convince a stubborn group of people that the units they have been using all their lives, and with which they are very comfortable, are immediately being replaced by something more “convenient” and systematic. In fact, it is so difficult that the immediate legal and coercive metrication of America could not be accomplished in the 20th century. Instead, we are now “encouraged” to slowly adopt the metric system until eventually all of us are brainwashed to not know any better. 😉

By the way, the units we use in America are based on the imperial system developed in England, though the two now differ slightly due to their separate evolution in Europe and the New World.

The International System of Units (or SI units, as we abbreviate it after the French title) is based on this metric system. It is built on factors of ten and has seven base units: the meter (length), kilogram (mass), second (time), ampere (electric current), kelvin (temperature), candela (luminous intensity), and mole (amount of substance). Some of these are more familiar than others, of course (I can’t recall the last time I used the word “candela” in polite conversation). All other units can be formed from these base units (e.g., hertz = 1/s). Prefixes such as centi-, mega-, and atto- designate smaller or larger quantities of a unit in powers of ten–most commonly in steps of 1,000, though everyday prefixes like centi- and deci- fill in the gaps.
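
Here’s a quick sketch of a few of those prefixes as powers of ten, with a tiny converter (the names and values are the standard SI ones; the joke in this post’s title is now reproducible):

```python
# A few SI prefixes as powers of ten
PREFIXES = {
    "atto":  1e-18,
    "nano":  1e-9,
    "micro": 1e-6,
    "milli": 1e-3,
    "centi": 1e-2,   # not a step of 1,000 -- an everyday exception
    "":      1e0,    # no prefix
    "kilo":  1e3,
    "mega":  1e6,
    "giga":  1e9,
}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Re-express a value given in one prefixed unit in terms of another."""
    return value * PREFIXES[from_prefix] / PREFIXES[to_prefix]

print(convert(300, "mega", "giga"))  # 300 megaAmericans = 0.3 gigaAmericans
```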

The Bureau international des poids et mesures, or BIPM, is an international standards organization that supervises the definitions of the base SI units based on input from standards laboratories around the world. People and committees interact through the BIPM to maintain the definitions of these standard units. It may seem a little silly that one needs to “maintain” definitions of units, but, as I have mentioned before, quantities like time are being measured ever more accurately, meaning the time standard (defined by the duration of a very specific atomic transition in cesium) can be more and more accurately realized. Also, the kilogram is currently defined by an actual artifact–a cylinder made of a platinum alloy that is locked up like Fort Knox at the BIPM. But because of fluctuations and movement of particles on and in the cylinder (some understood, some a mystery), the actual mass can vary; in fact, it has been found to have drifted by some 50 micrograms relative to its official copies over many decades. While that doesn’t seem like much to us, just as with time, there are applications that need very precise definitions of mass. These issues have to be continuously discussed and dealt with.

In light of this problem with defining a standard by an actual artifact, the standards committees are actively pursuing a plan to replace this artifact with a definition based entirely on fundamental, fixed numerical values in nature. This has not yet been implemented, but the work is moving forward. It has been proposed that all units be defined in terms of the second, the most accurately measured quantity in nature, though the means by which this would be accomplished for all six remaining base units, including the kilogram, is not perfectly obvious.

Some of us have our own standard units of measurement, particularly for length. For instance, I know that the distance from the base of my palm to the tip of my index or ring finger is almost exactly six inches. When I am at a store and want to know how long something is and I don’t have a ruler, I can roughly estimate the size using my hand length. Some of us step out yards by our stride or compare our height to a vertical object. Since we are familiar with our own bodies, we can get a really good feel for another object by comparing the two. This is one of the reasons an instant conversion of a society from one unit system to another is so difficult; people spend their whole lives gaining a familiarity with distance, weight, volume, etc., in a particular unit system and have no intuition for the units of another. I know exactly how far 1,000 miles is (the distance from Denver to my hometown), but I couldn’t tell you where I would end up if I drove 1,000 km in the same direction.

The process of comparing one measuring reference (like the length of my hand) to a standard (like a ruler) is called calibration. Calibration against a verified source is how standards are distributed throughout society, with varying degrees of accuracy as required by the situation.

So, that’s your overview of SI units (and units in general). I personally find this subject vastly interesting and incredibly useful. But lest you think all of the standards and measurement work going on at NIST is boring and esoteric, I give you the beer gauge, developed by an employee right here in Boulder!