Compared to biologists and mathematicians, physicists are much more uniform in their worldview. There is a central canon of physics that everyone is taught. With respect to how physicists approach biology, especially with regard to how many details to include, their ideas are shaped by the concepts of universality and the renormalization group, which I will explain below. This is partly what gives physicists the confidence that complex phenomena can have simple explanations, although this hegemony of the physics worldview is starting to break as physicists become more immersed in biology. In fact, I've even noticed a backlash from some physicists-cum-biologists towards colleagues who espouse the notion that details can be dispensed with. Some physicists are very much of the view that biologically detailed modeling is necessary to make progress. In this sense, what I'm describing might be more appropriately called the old physics worldview.
The concept of universality arose from the study of phase transitions and critical phenomena, with inspiration from quantum field theory. In a nutshell, it says that for certain systems in regimes where there is no obvious length scale (usually indicated by power law scaling), such as at the critical point of a second-order phase transition, the large-scale behavior of the system is independent of the microscopic details and depends only on general properties such as the dimension of the space and the symmetries of the system. Hence, systems can be classified into what are called universality classes. Although the theory was developed for critical phenomena in phase transitions, it has since been generalized to apply to a wide range of dynamical situations such as earthquakes, avalanches, flow through porous media, reaction-diffusion systems, and so forth.
The paradigmatic system for critical phenomena is magnetism. Bulk (ferro)magnetism arises when the atoms (each of which has a small magnetic moment) align and produce a macroscopically observable magnetization. However, this only occurs at low temperatures. At high enough temperatures, the random motions of the atoms destroy the alignment and the magnetization is lost (the material becomes paramagnetic). The change from ferromagnetism to paramagnetism is called a phase transition and occurs at a critical temperature (the Curie temperature).
These systems are understood by considering the energy associated with different states. The probability of occupying a given state is given by the Boltzmann weight exp(-H(m)/kT), where H(m) is the internal energy of the state with magnetization m (also called the Hamiltonian), T is the temperature, and k is the Boltzmann constant. Given the Boltzmann factor, the partition function (the sum of the Boltzmann weights over all states) can be constructed, from which all quantities of interest can be obtained. This particular system was studied over a century ago by notables such as Pierre Curie, who, using known microscopic laws of magnetism and mean field theory, found that the magnetization m is nonzero below a critical temperature Tc and zero above it.
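To make the bookkeeping concrete, here is a minimal sketch (mine, not from any reference) that brute-forces the partition function for a small one-dimensional Ising ring and computes the Boltzmann-weighted average magnetization; the chain length, coupling J, and temperatures are illustrative choices.

import itertools, math

def average_abs_magnetization(N, J, kT):
    """Brute-force Boltzmann average of |m| for an N-spin Ising ring."""
    Z, m_avg = 0.0, 0.0
    for spins in itertools.product([-1, 1], repeat=N):
        # Nearest-neighbor Hamiltonian with periodic boundary conditions.
        H = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        w = math.exp(-H / kT)  # Boltzmann weight of this state
        Z += w                 # partition function: sum of weights over states
        m_avg += w * abs(sum(spins)) / N
    return m_avg / Z

# Alignment survives at low temperature and is washed out at high temperature.
print(average_abs_magnetization(N=10, J=1.0, kT=0.5))   # close to 1
print(average_abs_magnetization(N=10, J=1.0, kT=5.0))   # much smaller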
However, the modern way we think of phase transitions starts with Landau, who first applied it to the onset of superfluidity in helium. Instead of trying to derive the energy from first principles, Landau said: let's write down a general form based on the symmetries of an order parameter, which in this example is the magnetization m(x) at spatial location x. Since the energy must be a scalar, it can only depend on terms like |m|^2 or (grad m)^2. The first few terms then obey H ~ \int dx [ q (grad m)^2 + (T-Tc)/2 m^2 + u m^4 + ...], for parameters q and u. The (grad m)^2 term accounts for spatial fluctuations. If fluctuations are ignored, then this is called mean field theory, in which case H ~ (T-Tc)/2 m^2 + u m^4. The partition function can be estimated by a saddle point approximation, which in the mean field limit amounts to evaluating the critical points of H, namely m=0 and m^2=(Tc-T)/4u. These correspond to the equilibrium states of the system: if T is greater than Tc the only solution is m=0, and if T is less than Tc the magnitude of the magnetization is nonzero.
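Spelling out the mean field step in my own notation (dropping the overall volume factor), the saddle point condition is

\[
H(m) = \frac{T - T_c}{2}\, m^2 + u\, m^4, \qquad
\frac{dH}{dm} = (T - T_c)\, m + 4 u\, m^3 = 0
\;\Longrightarrow\; m = 0 \quad \text{or} \quad m^2 = \frac{T_c - T}{4u},
\]

so the nonzero solution is real only for T < Tc, recovering the statement above.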
The partition function cannot be explicitly computed in the presence of fluctuations. This is where Ken Wilson and the renormalization group come in. What Wilson said, following people before him like Murray Gell-Mann, Francis Low, and Leo Kadanoff, is: suppose we have scale invariance, which is true near a critical point. Then if we integrate out small length scales (or high spatial frequencies), rescale in x, and then renormalize m, we end up with a new partition function with slightly different parameters. These operations form a group action (i.e., a dynamical system) on the parameters of the partition function. Thus, a scale-invariant system should be at a fixed point of the renormalization group action. In other words, if you keep applying the renormalization group, the parameters can flow to a fixed point, and the location of the fixed point depends only on the symmetry of the order parameter and the dimension of the space. Many different systems can flow to the same fixed point. The most important element of the renormalization group, in terms of the physics worldview, is that terms in the Hamiltonian are renormalized in different ways. Some grow (these are called relevant operators), some stay the same (marginal operators), and some shrink (irrelevant operators). For critical systems, only a small number of terms in the Hamiltonian are relevant, and this is why microscopic details do not matter at large scales.
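As a toy illustration of parameters flowing under the renormalization group, here is a sketch (a standard textbook exercise, not Wilson's calculation) of real-space decimation for the one-dimensional Ising model: summing out every other spin gives the exact recursion K' = (1/2) ln cosh(2K) for the dimensionless coupling K = J/kT, and iterating it drives K to the trivial high-temperature fixed point K = 0, which is one reason the 1D model has no finite-temperature transition.

import math

def decimate(K):
    # Integrate out every other spin of a 1D Ising chain; the surviving
    # spins interact with a renormalized coupling K'.
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0  # start from a strong dimensionless coupling J/kT
for step in range(1, 11):
    K = decimate(K)
    print(f"step {step}: K = {K:.6f}")
# K shrinks on every step: the coupling flows to the fixed point K = 0.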
Now, these ideas were originally developed for behavior near a critical point, which is pretty specialized. If they were only applicable to an equilibrium phase transition, then physicists really wouldn't have a leg to stand on in terms of ignoring details. However, the ideas were later generalized to dynamical systems with critical behavior. What also motivates physicists is that power laws (also called 1/f or fractal scaling) seem to be ubiquitous. They can be found in the size distribution of earthquakes, thermal noise in resistors, the size of river meanders, the coastline of Norway, the size of hubs in the Internet, the connectivity of protein networks, and even neural firing patterns, to name a few. Although there is no agreement as to why these systems exhibit power laws (many theories have been proposed), the spectre of the renormalization group and universality permeates the air and influences the physicist worldview.
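As an aside, even testing for a power law is subtle. Here is a minimal sketch (synthetic Pareto data standing in for real measurements, using the standard maximum-likelihood estimator for a continuous power law, alpha = 1 + n / sum(ln(x_i/x_min))):

import math, random

random.seed(0)
alpha_true, x_min, n = 2.5, 1.0, 100_000

# Synthetic power-law samples via inverse transform sampling:
# x = x_min * u^(-1/(alpha-1)) with u uniform on (0, 1].
data = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(n)]

# Maximum-likelihood estimate of the exponent.
alpha_hat = 1.0 + n / sum(math.log(x / x_min) for x in data)
print(f"true alpha = {alpha_true}, estimated alpha = {alpha_hat:.3f}")

On real data one also has to estimate x_min and compare against alternatives like lognormals, which is where many claimed power laws get shakier.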
My personal view is that some details matter immensely while others do not. However, there is no a priori systematic way of deducing which is which; there are only rules of thumb and experience to assist us. Hence, even if you buy into the details-may-not-matter worldview, there is no prescription for how to implement it. What it does do is give me less confidence that there is such a thing as the "correct" theory for a system. I'm more inclined to believe that, given the current state of knowledge and a specific set of questions, some theories perform better than others. With more information, we can refine our theories. However, I don't think this process ever converges to "the" theory, because specifying what a system is is somewhat arbitrary. Nothing is purely isolated from its surroundings, so drawing a boundary always involves a choice. These can be very logical and well-informed choices, but choices nonetheless. Also, we can never have full control of all the external inputs that can affect a system. In this way, I have a Bayesian viewpoint: we only make progress by updating our priors.
Saturday, September 13, 2008