Wednesday, December 21, 2011

Combinatorial species

Consider a species which assigns to each finite set some other finite set.

For example, assign to a set A the set of all its permutations.

Because the size of the output set only depends on the size of A, we get a list of integers $f_n$ which describe the size of the output set on an n-element set. Put these together in an exponential generating function to get $F(x) = \sum_{n \ge 0} f_n \frac{x^n}{n!}$. Addition, multiplication and composition of generating functions give operations on species.
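Two standard examples make the operations concrete: permutations and derangements (fixed-point-free permutations).

```latex
% Permutations P: |P[n]| = n!, so
P(x) = \sum_{n \ge 0} n!\,\frac{x^n}{n!} = \frac{1}{1 - x}
% Sets E: exactly one structure on each A, so E(x) = e^x.
% A permutation is a set of fixed points together with a
% derangement of the rest, i.e. P = E \cdot D, so
D(x) = \frac{P(x)}{E(x)} = \frac{e^{-x}}{1 - x}
```

Multiplication of species really is multiplication of the generating functions.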

Of course this doesn't take into account automorphisms. Let's say our species was trees with elements of A as vertices. Then two trees that differ only by a permutation of the three-element set A would appear different even though they are isomorphic. We can identify all of these together and get an isomorphism type generating function $\tilde{F}(x) = \sum_{n \ge 0} \tilde{f}_n x^n$, where $\tilde{f}_n$ counts isomorphism types of structures on an n-element set (an ordinary, rather than exponential, generating function).

In addition there is the more general function for a species called the cycle index series:

$Z_F(p_1, p_2, p_3, \ldots) = \sum_{n \ge 0} \frac{1}{n!} \sum_{\sigma \in S_n} \mathrm{fix}\, F[\sigma] \; p_1^{\sigma_1} p_2^{\sigma_2} p_3^{\sigma_3} \cdots$

Here $\sigma_i$ is the number of cycles of length i in the permutation $\sigma$, and $\mathrm{fix}\, F[\sigma]$ is the number of F-structures on the n-element set A which have $\sigma$ as an automorphism, i.e. are fixed by it.

You get a whole bunch of identities for all the operations you can perform on species.

To understand this series, first look at it for a specific n. For example, consider a cube. It has 24 rotational symmetries acting on its 6 faces.

There is the identity, which consists of 6 1-cycles. It therefore contributes $p_1^6$.
There are 6 quarter-turn face rotations, which consist of 2 1-cycles and 1 4-cycle. Each contributes $p_1^2 p_4$.
There are 3 half-turn face rotations, which consist of 2 1-cycles and 2 2-cycles. Each contributes $p_1^2 p_2^2$.
There are 8 one-third rotations around the main diagonals. This gives 2 3-cycles, contributing $p_3^2$. You can see the pattern by now.
There are 6 180 degree rotations around the axes connecting the midpoints of opposite edges. Each gives 3 2-cycles, contributing $p_2^3$.

Add these all together, divide by the group order 24, and you get the cycle index associated to this group of automorphisms acting on the set of 6 faces:

$Z = \frac{1}{24}\left(p_1^6 + 6 p_1^2 p_4 + 3 p_1^2 p_2^2 + 8 p_3^2 + 6 p_2^3\right)$

Putting this beside our previous definition of the cycle index series, you see that the species used here was the one with $\mathrm{fix}\, F[\sigma]$ equal to 1 or 0 depending on whether that permutation of 6 elements was or was not a rotational symmetry of the cube.
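As a sanity check on the cycle index, substituting a number of colors k for every variable $p_i$ counts colorings of the cube's faces up to rotation (Polya counting). A quick sketch:

```python
# Count colorings of the cube's 6 faces with k colors, up to rotation,
# by substituting p_i -> k into the cycle index of the rotation group.
# Each conjugacy class is (multiplicity, {cycle_length: count}).
CYCLE_TYPES = [
    (1, {1: 6}),          # identity
    (6, {1: 2, 4: 1}),    # quarter-turn face rotations
    (3, {1: 2, 2: 2}),    # half-turn face rotations
    (8, {3: 2}),          # third-turns about main diagonals
    (6, {2: 3}),          # half-turns about edge midpoints
]

def cube_face_colorings(k: int) -> int:
    """Substitute p_i = k for every i: each cycle is colored freely."""
    total = sum(mult * k ** sum(ct.values()) for mult, ct in CYCLE_TYPES)
    assert total % 24 == 0  # Burnside guarantees divisibility
    return total // 24

print(cube_face_colorings(2))  # 10 distinct 2-colorings
print(cube_face_colorings(3))  # 57 distinct 3-colorings
```

The values 10 and 57 are the classical answers for 2 and 3 colors.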

Tuesday, December 20, 2011

Seiberg Witten

Consider the equations (for a spin-c structure on a 4-manifold X, with A a connection on the determinant line bundle and $\Phi$ a positive spinor)

$D_A \Phi = 0, \qquad F_A^+ = \sigma(\Phi)$

where $D_A$ is the Dirac operator, $F_A^+$ is the self-dual part of the curvature, and $\sigma(\Phi)$ is the natural quadratic form $\Phi \otimes \Phi^* - \frac{1}{2}|\Phi|^2$.
Where do these come from? How about the action for a gauge theory. One such action is

$S(A, \Phi) = \int_X \left( |D_A \Phi|^2 + |F_A^+ - \sigma(\Phi)|^2 \right) d\mathrm{vol}$

whose absolute minima are exactly the solutions above.
What we want to do is understand the space of solutions to these equations. To help do that, perturb the second equation by a self-dual 2-form $\eta$, replacing it with $F_A^+ = \sigma(\Phi) + i\eta$, so that for generic $\eta$ the solutions are irreducible, i.e. $\Phi \neq 0$.

After some index theorem calculations you get that the expected dimension of this moduli space is

$d = \frac{c_1(L)^2 - 2\chi(X) - 3\sigma(X)}{4}$

where L is the determinant line bundle, $\chi$ the Euler characteristic and $\sigma$ the signature. This is the index of the deformation complex, after identifying the tangent space of the moduli space with the first cohomology of said complex.
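For example, assuming the standard dimension formula $d = \frac{1}{4}\left(c_1(L)^2 - 2\chi - 3\sigma\right)$, a K3 surface with the spin structure coming from its trivial canonical class ($c_1 = 0$, $\chi = 24$, $\sigma = -16$) gives

```latex
d = \frac{0 - 2 \cdot 24 - 3 \cdot (-16)}{4} = \frac{-48 + 48}{4} = 0
```

so one expects a zero-dimensional moduli space: a finite signed count of solutions.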

The moduli space can also be proven to be oriented.

We can now define our invariant for 4-manifolds. If the expected dimension is negative, we assign that manifold 0, because generically there are no solutions to the Seiberg-Witten equations. If it is zero, then we expect a moduli space consisting of a finite number of points, each with a sign. We add them up and get a number.

On the physics side, these equations describe massless magnetic monopoles on the 4-manifold.

http://www.its.caltech.edu/~matilde/swcosi.pdf

Thursday, December 8, 2011

Calogero-Moser

Consider n classical particles on a line that repel each other with an inverse-square potential. You would not expect this to be completely integrable. But it is; this presentation of the phase space is just hiding the many other Hamiltonians available to you. You just don't see them; I don't blame you.

So how should you think of this system?

Start with the cotangent bundle of the space of n by n matrices. Identify it as the space of pairs of n by n matrices (X, Y). Then perform a symplectic reduction with moment map given by the commutator [X, Y], which lands in the Lie algebra sl(n, C). But instead of the inverse image of 0, which would give you commuting pairs (the commuting scheme), take the inverse image of the set of traceless matrices T such that T + 1 has rank 1.

Take the eigenvalues of X as the positions of the n particles. You will find that the off-diagonal entries of Y are then determined ($Y_{ij} = 1/(x_i - x_j)$), but the diagonal entries are not. Identify those diagonal entries as the momenta. If you take the function $\mathrm{Tr}(Y^2)$ on this space, you end up getting the inverse-square Hamiltonian we originally described.
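Here is a small numerical sketch of this construction. The explicit off-diagonal form $Y_{ij} = 1/(x_i - x_j)$ is my reading of "the off-diagonal terms will be determined", and the sample positions and momenta are made up:

```python
import numpy as np

# A point of the Calogero-Moser space: X diagonal with distinct
# eigenvalues (positions), Y with momenta on the diagonal and
# 1/(x_i - x_j) off the diagonal.
x = np.array([-2.0, -1.0, 0.5, 1.5, 3.0])   # made-up positions
p = np.array([0.3, -0.7, 1.1, 0.0, -0.4])   # made-up momenta
n = len(x)

X = np.diag(x)
Y = np.diag(p)
for i in range(n):
    for j in range(n):
        if i != j:
            Y[i, j] = 1.0 / (x[i] - x[j])

# Moment map condition: [X, Y] is traceless and [X, Y] + I has rank 1.
T = X @ Y - Y @ X
assert abs(np.trace(T)) < 1e-10
assert np.linalg.matrix_rank(T + np.eye(n)) == 1

# Tr(Y^2) is an inverse-square Hamiltonian in these coordinates.
H = np.trace(Y @ Y)
H_explicit = np.sum(p**2) - sum(
    1.0 / (x[i] - x[j]) ** 2 for i in range(n) for j in range(n) if i != j
)
assert abs(H - H_explicit) <= 1e-8 * max(1.0, abs(H))
print("rank-1 condition and Tr(Y^2) identity verified")
```

The commutator comes out to (all-ones matrix) minus (identity), which is why $[X, Y] + 1$ has rank 1.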

But now we see the other Hamiltonians too: $\mathrm{Tr}(Y^k)$. This gives the rational Calogero-Moser system. You can choose another family of Hamiltonians, like $\mathrm{Tr}((XY)^k)$, for another integrable system: the trigonometric one, so called because the force between the particles involves the cosecant of the displacement.

For more details, see

Quantum Dilogarithm

http://iopscience.iop.org/0305-4470/28/8/014/pdf/0305-4470_28_8_014.pdf

The boring dilogarithm is the polylogarithm

$\mathrm{Li}_s(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^s}$

with s replaced by 2. Fine, I'll admit it's not really boring. Polylogs have a lot going on. They come up in the integrals for the Fermi-Dirac and Bose-Einstein distributions. In addition, the monodromy group for the dilog is the Heisenberg group.

But we can deform it while still retaining nice identities like the pentagon (five-term) identity, here written for the Rogers dilogarithm $L(x) = \mathrm{Li}_2(x) + \frac{1}{2}\log x \log(1-x)$:

$L(x) + L(y) = L(xy) + L\!\left(\frac{x(1-y)}{1-xy}\right) + L\!\left(\frac{y(1-x)}{1-xy}\right)$

We need to use the quantum dilogarithm (for |q| < 1)

$e_q(x) = \prod_{n=0}^{\infty} \frac{1}{1 - x q^n} = \sum_{n=0}^{\infty} \frac{x^n}{(1-q)(1-q^2)\cdots(1-q^n)}$

This function is related to the dilogarithm as

$e_q(x) \approx \exp\!\left(\frac{\mathrm{Li}_2(x)}{\varepsilon}\right)$

where $q = e^{-\varepsilon}$ and $\varepsilon \to 0$. Actually this is only true to leading order in $\varepsilon$.

Let's let U and V be translation operators in position and momentum space:

$(U\psi)(x) = \psi(x + a), \qquad (V\psi)(x) = e^{ibx}\,\psi(x)$

They will q-commute: $UV = qVU$ with $q = e^{iab}$.

We end up getting

$e_q(V)\,e_q(U) = e_q(U + V), \qquad e_q(U)\,e_q(V) = e_q(V)\,e_q(-VU)\,e_q(U)$

The second identity is a deformed version of the pentagon identity. We can see that if we try taking the classical limit $q \to 1$ in order to do a stationary phase approximation.
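A numerical sanity check of the two classical ingredients: the five-term relation for the Rogers dilogarithm $L(x) = \mathrm{Li}_2(x) + \frac{1}{2}\log x \log(1-x)$, and the $\varepsilon \to 0$ asymptotics $\log e_q(x) \approx \mathrm{Li}_2(x)/\varepsilon$ for $q = e^{-\varepsilon}$ (plain series and products, no special-function library):

```python
import math

def li2(x: float, terms: int = 4000) -> float:
    """Dilogarithm Li_2(x) = sum_{n>=1} x^n / n^2, plain series for |x| < 1."""
    return sum(x**n / n**2 for n in range(1, terms + 1))

def rogers_L(x: float) -> float:
    """Rogers dilogarithm L(x) = Li_2(x) + (1/2) log(x) log(1-x), 0 < x < 1."""
    return li2(x) + 0.5 * math.log(x) * math.log(1.0 - x)

# Classical five-term (pentagon) identity at sample points.
x, y = 0.3, 0.5
lhs = rogers_L(x) + rogers_L(y)
rhs = (rogers_L(x * y)
       + rogers_L(x * (1 - y) / (1 - x * y))
       + rogers_L(y * (1 - x) / (1 - x * y)))
assert abs(lhs - rhs) < 1e-9

# q -> 1 asymptotics of e_q(x) = prod_{n>=0} 1/(1 - x q^n) at x = 0.5.
eps = 1e-3
q = math.exp(-eps)
log_eq = -sum(math.log(1.0 - 0.5 * q**n) for n in range(40000))
assert abs(eps * log_eq - li2(0.5)) < 0.01
print("five-term relation and semiclassical asymptotics check out")
```

The leftover discrepancy in the second check is the order-$\varepsilon$ correction mentioned above.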

These kinds of equations were found by Baxter in some 3D integrable systems.

Higgs

Ralph Edezhath tells me that his adviser Luty is pretty confident that this latest rumor of Higgs at 125 GeV is real. We will see on December 13.

Update: We have a gallon challenge bet on this to be settled once more data gets processed.

Tuesday, December 6, 2011

Stochastic vs Quantum

http://math.ucr.edu/home/baez/prob.pdf

A little bit about stochastic vs quantum mechanics. Play the same trick where Hermitian operators generate unitary transformations, but with infinitesimal stochastic operators generating stochastic matrices.

Stochastic matrices list off the probabilities of a transition from state i to state j. Therefore every row has to sum to 1 since state i has to go somewhere. Also the entries are all between 0 and 1 since they are probabilities.

But stochastic matrices aren't as good as unitary matrices: they usually aren't invertible. Think of the stochastic matrix that takes every state to state 1 with probability 1. Clearly you have no information about the original state. For all you know the probabilities were equally distributed in the original configuration.

We can't reverse the configuration, but at least we can propagate forward in time. The example Baez gives involves some number of amoebas that can be born and die at given rates. Given an infinitesimal stochastic operator encoding the rates for a bunch of processes, like creation of an amoeba or, more generally, killing k amoebas and creating l, we can draw Feynman diagrams which, when summed, give the full time evolution operator, just as usual.
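A minimal sketch of the amoeba example, with made-up birth and death rates and a truncated population space. Following the post's convention that entry (i, j) is the rate of the transition i -> j, the generator's rows sum to zero and its exponential is row-stochastic:

```python
import numpy as np

# Toy amoeba model on populations {0, 1, ..., N}: each amoeba
# independently splits at rate `birth` and dies at rate `death`
# (rates chosen for illustration only).
N, birth, death = 30, 0.4, 0.3
H = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        H[i, i + 1] = birth * i     # one of i amoebas divides
    if i > 0:
        H[i, i - 1] = death * i     # one of i amoebas dies
    H[i, i] = -H[i].sum()           # infinitesimal stochastic: rows sum to 0

def expm(A, squarings=20, order=12):
    """Matrix exponential via scaling-and-squaring a truncated Taylor series
    (morally: summing short-time Feynman diagrams, then composing)."""
    B = A / 2.0**squarings
    P, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, order + 1):
        term = term @ B / k
        P = P + term
    for _ in range(squarings):
        P = P @ P
    return P

U = expm(H * 1.0)                        # time-evolution operator for t = 1
assert np.allclose(U.sum(axis=1), 1.0)   # rows of a stochastic matrix sum to 1
assert U.min() > -1e-9                   # entries are (numerically) nonnegative
```

Starting from any probability distribution over populations, multiplying by U propagates it one unit of time forward, and probability is conserved by construction.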

This can be generalized if you give more vertices that describe more processes between your states. Again draw your Feynman diagrams and compute the time evolution of your initial state.