Wednesday, December 21, 2011

Combinatorial species

Consider a species which assigns to each finite set some other finite set.

For example, assign to the set A the set of all its permutations.

Because the number of structures only depends on the size of A, we get a list of integers $f_n$ which describe the size of the output set. Put these together in an exponential generating function to get $F(x)=\sum_{n\ge0}f_n\,\frac{x^n}{n!}$; for the permutation species $f_n=n!$ and $F(x)=\frac{1}{1-x}$. Addition, multiplication and composition of generating functions give operations on species.
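Multiplication of species, for instance, corresponds to a binomial convolution of the counting sequences, since a product structure splits the underlying set into two labeled blocks. A minimal sketch (function names are mine):

```python
from math import comb

def egf_product(f, g, n):
    """Counts for the product species (F.G): sum over the ways to split
    a k element set into two blocks carrying an F- and a G-structure."""
    return [sum(comb(k, j) * f[j] * g[k - j] for j in range(k + 1))
            for k in range(n)]

# The species E ("being a set") has exactly one structure on every finite set.
E = [1] * 8
# E.E puts an E-structure on each block of a 2-block split: i.e. subsets.
print(egf_product(E, E, 8))  # [1, 2, 4, 8, ...]: 2^n, matching e^x * e^x = e^(2x)
```

The convolution is exactly what multiplying the exponential generating functions does coefficient by coefficient.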

Of course this doesn't take into account automorphisms. Let's say our species was trees that had elements of A as vertices. Then two labelings of the same tree would appear different even though they are only off by a permutation of the three element set A. We can identify all of these together and get an isomorphism type generating function $\tilde F(x)=\sum_{n\ge0}\tilde f_n\,x^n$, an ordinary generating function counting structures up to isomorphism.

In addition there is the more general function for a species called the cycle index series,

$Z_F(x_1,x_2,x_3,\dots)=\sum_{n\ge0}\frac{1}{n!}\sum_{\sigma\in S_n}\operatorname{fix}F[\sigma]\;x_1^{\sigma_1}x_2^{\sigma_2}\cdots$

Here $\sigma_i$ is the number of cycles of length i in said permutation and $\operatorname{fix}F[\sigma]$ is the number of F structures on the n element set A which are fixed by such an automorphism.

You get a whole bunch of identities for all the operations you can perform on species.

To understand this function first look at it for a specific n. For example consider a cube. It has 24 automorphisms acting on 6 faces.

There is the identity, which consists of 6 1-cycles. It therefore contributes $x_1^6$.
There are 6 quarter turn face rotations which consist of 2 1-cycles and 1 4-cycle. Each contributes $x_1^2x_4$.
There are 3 half turn face rotations which consist of 2 1-cycles and 2 2-cycles. Each contributes $x_1^2x_2^2$.
There are 8 1/3 rotations around the main diagonals. These give 2 3-cycles each, contributing $x_3^2$; you can see the pattern by now.
There are 6 180 degree rotations around the axes connecting the midpoints of opposite edges. These give 3 2-cycles each, contributing $x_2^3$.

Add these all together, divide by 24, and you get the cycle index associated to this group of automorphisms acting on the set of 6 faces:

$Z=\tfrac{1}{24}\left(x_1^6+6\,x_1^2x_4+3\,x_1^2x_2^2+8\,x_3^2+6\,x_2^3\right).$

Putting this with our previous definition of cycle index series, you see that the species used here was the one with $\operatorname{fix}F[\sigma]$ equal to 1 or 0 depending on whether that permutation of 6 elements was or was not a symmetry of the cube.
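Feeding the five cycle types into Pólya counting (substitute the number of colors for every $x_i$ in the cycle index) is a three line check, and the classic count of face colorings of the cube comes out. A sketch (names mine):

```python
from fractions import Fraction

# Cycle types of the 24 rotations acting on the 6 faces: (count, {cycle_len: how_many}).
rotations = [
    (1, {1: 6}),           # identity
    (6, {1: 2, 4: 1}),     # quarter turns about face axes
    (3, {1: 2, 2: 2}),     # half turns about face axes
    (8, {3: 2}),           # 1/3 turns about the main diagonals
    (6, {2: 3}),           # half turns about edge-midpoint axes
]
assert sum(c for c, _ in rotations) == 24

def colorings(num_colors):
    """Polya counting: substitute x_i = num_colors into the cycle index,
    so each rotation contributes num_colors**(number of cycles)."""
    total = sum(c * num_colors ** sum(t.values()) for c, t in rotations)
    return Fraction(total, 24)

print(colorings(2))  # 10 distinct 2-colorings of the faces
```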

Tuesday, December 20, 2011

Seiberg Witten

Consider the Seiberg-Witten equations for a spin$^c$ structure on a 4-manifold: for a U(1) connection $A$ and a positive spinor $\Phi$,

$D_A\Phi=0,\qquad F_A^+=\sigma(\Phi),$

where $\sigma(\Phi)$ is the self-dual 2-form built quadratically from $\Phi$.
Where do these come from? How about from the action for a gauge theory. One such action is

$S(A,\Phi)=\int_X\left(|D_A\Phi|^2+\left|F_A^+-\sigma(\Phi)\right|^2\right)dv,$

whose absolute minima are exactly the solutions above.
What we want to do is understand the space of solutions to these equations. To help do that, perturb the second equation by a self-dual 2-form $\eta$, so that $F_A^+=\sigma(\Phi)+i\eta$; for generic $\eta$ the solutions are irreducible, i.e. $\Phi\neq0$.

After some index theorem calculations you get that the expected dimension of this moduli space is $\frac{c_1(L)^2-2\chi-3\sigma}{4}$, with $L$ the determinant line, $\chi$ the Euler characteristic and $\sigma$ the signature. This is the index of the deformation complex; one then identifies the tangent space of the moduli space with the first cohomology of said complex.
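Assuming the index formula $d=(c_1(L)^2-2\chi-3\sigma)/4$, here is a trivial evaluator; the K3 numbers $\chi=24$, $\sigma=-16$, $c_1=0$ are standard:

```python
def expected_dim(c1_sq, euler, signature):
    """Expected dimension of the Seiberg-Witten moduli space,
    d = (c1(L)^2 - 2*chi - 3*sigma) / 4 (assumed sign convention)."""
    return (c1_sq - 2 * euler - 3 * signature) / 4

# K3 surface: chi = 24, sigma = -16, c1 = 0 for the canonical spin-c structure.
print(expected_dim(0, 24, -16))  # 0.0: a zero-dimensional moduli space
```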

The moduli space can also be proven to be oriented.

We can now define our invariant for 4-manifolds. If the expected dimension is below zero, we assign that manifold 0, because there are generically no solutions to the Seiberg-Witten equations. If it is zero, then we expect a moduli space consisting of a finite number of points, each with a sign. We add them up and get a number.

The physics of this involves describing massless magnetic monopoles on the 4-manifold.

http://www.its.caltech.edu/~matilde/swcosi.pdf

Thursday, December 8, 2011

Calogero-Moser

Consider n classical particles on a line that repel each other with an inverse square law. You would not expect this to be completely integrable. But it is this presentation of the phase space that is hiding the many other Hamiltonians available to you. You just don't see them; I don't blame you.

So how should you think of this system?

Start with the cotangent bundle of the space of n by n matrices. Identify it as the space of pairs of n by n matrices (X,Y). Then perform a symplectic reduction with moment map given by the commutator of the pair, which maps to the Lie algebra sl(n,C). But instead of the inverse image of 0, which would give you commuting pairs (the commuting scheme), take the inverse image of the set of traceless matrices T such that T+1 has rank 1.

Take the eigenvalues of X as the positions of the n particles. You will find the off-diagonal entries of Y are then determined, but the diagonal entries are not; identify these diagonal entries as the momenta. If you take the function Tr(Y^2) on this space, you end up getting the inverse-square Hamiltonian we started with.

But now we see the other Hamiltonians too: they are Tr(Y^k). This gives the rational Calogero-Moser system. You can choose another family of Hamiltonians, like Tr((XY)^i), for another integrable system: the trigonometric one, so called because the potential between the particles goes like csc^2 of the displacement.
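A quick numerical check of this construction (the positions and momenta below are made-up sample data, and the sign of the potential depends on convention): with X diagonal and $Y_{ij}=1/(x_i-x_j)$ off the diagonal, $[X,Y]+1$ has rank 1 and Tr(Y^2) reproduces the inverse-square Hamiltonian.

```python
import numpy as np

# Hypothetical sample data: positions and momenta of n = 4 particles.
x = np.array([0.0, 1.0, 2.5, 4.0])
p = np.array([0.3, -1.0, 0.2, 0.5])
n = len(x)

X = np.diag(x)
Y = np.diag(p).astype(float)
for i in range(n):
    for j in range(n):
        if i != j:
            Y[i, j] = 1.0 / (x[i] - x[j])

# Moment map value: T = [X, Y] is traceless and T + 1 has rank 1.
T = X @ Y - Y @ X
print(np.linalg.matrix_rank(T + np.eye(n)))  # 1

# Tr(Y^2) is the inverse-square Hamiltonian (up to sign conventions):
H = np.trace(Y @ Y)
expected = (p ** 2).sum() - sum(1.0 / (x[i] - x[j]) ** 2
                                for i in range(n) for j in range(n) if i != j)
print(abs(H - expected) < 1e-12)  # True
```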

For more details, see

Quantum Dilogarithm

http://iopscience.iop.org/0305-4470/28/8/014/pdf/0305-4470_28_8_014.pdf

The boring dilogarithm is the polylogarithm

$\mathrm{Li}_s(z)=\sum_{n=1}^{\infty}\frac{z^n}{n^s}$

with s replaced by 2. Fine, I'll admit it's not really boring. Polylogs have a lot going on. They come up in the integrals for the Fermi-Dirac and Bose-Einstein distributions. In addition the monodromy group for the dilog is the Heisenberg group.

But we can deform it while still retaining nice identities like the pentagon identity, which for the Rogers dilogarithm $L(x)=\mathrm{Li}_2(x)+\tfrac12\log(x)\log(1-x)$ reads

$L(x)+L(y)=L(xy)+L\!\left(\frac{x(1-y)}{1-xy}\right)+L\!\left(\frac{y(1-x)}{1-xy}\right),\qquad 0<x,y<1.$
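The five-term relation can be checked numerically straight from the series definition (a sketch; helper names are mine):

```python
import math

def li2(x, terms=400):
    """Dilogarithm Li_2(x) = sum_{k>=1} x^k / k^2, convergent for |x| <= 1."""
    return sum(x ** k / k ** 2 for k in range(1, terms))

def rogers(x):
    """Rogers dilogarithm L(x) = Li_2(x) + (1/2) log(x) log(1-x), 0 < x < 1."""
    return li2(x) + 0.5 * math.log(x) * math.log(1 - x)

# Five-term (pentagon) relation:
x, y = 0.3, 0.4
lhs = rogers(x) + rogers(y)
rhs = (rogers(x * y)
       + rogers(x * (1 - y) / (1 - x * y))
       + rogers(y * (1 - x) / (1 - x * y)))
assert math.isclose(lhs, rhs, rel_tol=1e-9)
```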

We need to use the q-Pochhammer symbol

$(x;q)_\infty=\prod_{n=0}^{\infty}(1-xq^n).$
This function is related to the dilogarithm as

$(x;q)_\infty\approx\exp\!\left(-\frac{\mathrm{Li}_2(x)}{\varepsilon}\right)$

where $q=e^{-\varepsilon}$. Actually this is only the leading behaviour as $\varepsilon\to0$: the exponent is $-\mathrm{Li}_2(x)/\varepsilon$ plus corrections that are smaller by powers of $\varepsilon$.
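That asymptotic statement is easy to test numerically, assuming the product form $(x;q)_\infty=\prod_{n\ge0}(1-xq^n)$ (a sketch; names mine):

```python
import math

def li2(x, terms=400):
    """Dilogarithm series, fine for 0 < x < 1."""
    return sum(x ** k / k ** 2 for k in range(1, terms))

def log_qpochhammer(x, q, terms=20000):
    """log of (x; q)_infty = prod_{n >= 0} (1 - x q^n), truncated."""
    return sum(math.log(1 - x * q ** n) for n in range(terms))

x, eps = 0.5, 1e-3
q = math.exp(-eps)
# Leading asymptotics as q -> 1: log (x; q)_infty ~ -Li2(x) / eps.
print(math.isclose(log_qpochhammer(x, q), -li2(x) / eps, rel_tol=1e-2))  # True
```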

Let's let U and V be translation operators in position and momentum space,

$U=e^{i\hat p},\qquad V=e^{i\hat x},\qquad [\hat x,\hat p]=i\hbar.$

By Baker-Campbell-Hausdorff they will q-commute: $UV=qVU$ with $q=e^{i\hbar}$.

We end up getting, in one common normalization (the powers of q in the arguments vary between references),

$e_q(V)\,e_q(U)=e_q(U+V)\qquad\text{and}\qquad e_q(U)\,e_q(V)=e_q(V)\,e_q(UV)\,e_q(U),$

where $e_q(x)=\sum_{n\ge0}x^n/(q;q)_n$ is the q-exponential built from our quantum dilogarithm. The second identity is a deformed version of the pentagon identity. We can see that if we try taking the classical $\hbar\to0$ limit in order to do a stationary phase approximation.

These kinds of equations were found by Baxter in some 3D integrable systems.

Higgs

Ralph Edezhath tells me that his adviser Luty is pretty confident that this latest rumor of Higgs at 125 GeV is real. We will see on December 13.

Update: We have a gallon challenge bet on this to be settled once more data gets processed.

Tuesday, December 6, 2011

Stochastic vs Quantum

http://math.ucr.edu/home/baez/prob.pdf

A little bit about stochastic vs quantum. Do the same trick of Hermitian generators of unitary transformations but with infinitesimal stochastic operators generating stochastic matrices.

Stochastic matrices list off the probabilities of a transition from state i to state j. Therefore every row has to sum to 1 since state i has to go somewhere. Also the entries are all between 0 and 1 since they are probabilities.

But stochastic matrices aren't as good as unitary matrices; they usually aren't invertible. Think of the stochastic matrix that takes every state to state 1 with probability 1. Clearly you have no information about the original state. For all you know the probabilities were equally distributed in the original configuration.

We can't reverse the evolution, but at least we can propagate forward in time. The example Baez gives is some number of amoebas that can be born and die with given rates. Given an infinitesimal stochastic operator encoding the rates for a bunch of processes, like creation of an amoeba or, more generally, killing k and creating l amoebas, we can draw Feynman diagrams which, when added up, give the full time evolution operator, just as usual.
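Here is a toy version of the amoeba example (the rates and the matrix-exponential helper are my own sketch): build the infinitesimal stochastic operator for a birth-death process and exponentiate it.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential by Taylor series (fine for small, tame matrices)."""
    out = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Birth-death chain for 0..N amoebas: birth rate b per amoeba, death rate d.
N, b, d = 5, 0.4, 0.3
H = np.zeros((N + 1, N + 1))    # H[i, j] = rate of jumping i -> j
for i in range(N + 1):
    if i < N:
        H[i, i + 1] = b * i     # one amoeba splits
    if i > 0:
        H[i, i - 1] = d * i     # one amoeba dies
    H[i, i] = -H[i].sum()       # rows sum to zero: infinitesimal stochastic

P = expm(H * 2.0)               # time-evolution operator for t = 2
print(np.allclose(P.sum(axis=1), 1.0))  # True: each row still sums to 1
```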

This can be generalized if you give more vertices that describe more processes between your states. Again draw your Feynman diagrams and compute the time evolution of your initial state.

Tuesday, November 8, 2011

Invariant Theory

Taken from Shamil Shakirov Nov 4 GRASP talk

Consider a path integral: it is invariant under the special linear group acting on the vector space of functions. With this goal in mind, let us consider the simpler case of an integral over a finite n-dimensional space.

We want invariants of polynomials like

$A(x)=\sum_{i,j=1}^{n}A_{ij}\,x_i x_j$

where i and j run from 1 to n. This is a degree 2 polynomial in n variables. The only invariants for this system are functions of the determinant of the matrix $A_{ij}$. This is as it should be, because in free field theories the path integral gives the square root of the determinant of the appropriate differential operator (the analog of A in the functional case). This means that the invariant ring, which describes all possible polynomial invariants, is generated by one element, the determinant. All invariants of the above polynomial will be powers of the determinant.
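The determinant's invariance under unimodular changes of variables is quick to check numerically (my own sketch, including a normalization trick to land in SL(n)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
A = A + A.T                      # symmetric matrix of the quadratic form

# Build a random C with det(C) = 1, i.e. an SL(n) change of variables x -> Cx.
C = rng.standard_normal((n, n))
if np.linalg.det(C) < 0:
    C[0] *= -1                   # make the determinant positive first
C = C / np.linalg.det(C) ** (1.0 / n)

# The form transforms as A -> C^T A C, and the determinant is unchanged.
print(np.isclose(np.linalg.det(C.T @ A @ C), np.linalg.det(A)))  # True
```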

For interacting field theories we need higher degree polynomials. In these cases the invariant ring is more complicated than just being simply generated by one element and having no relations to quotient out by.

Consider a cubic in two variables (u,v); call it A:

$A(u,v)=a\,u^3+b\,u^2v+c\,uv^2+d\,v^3.$

The ring of algebraic invariants is generated by the discriminant, which is some function of a,b,c,d. Call this D(A). This means that the integral

$Z(A)=\int e^{-A(u,v)}\,du\,dv$

must be some function of the discriminant, at least formally. Obviously, like path integrals, it doesn't actually converge. We can pin down what exactly this function is with scaling arguments. If you scale all of the coefficients, $A\mapsto\lambda A$, then do the substitution $(u,v)\mapsto\lambda^{-1/3}(u,v)$ to compensate, you find

$Z(\lambda A)=\lambda^{-2/3}Z(A),\qquad D(\lambda A)=\lambda^{4}D(A),$

since the discriminant has degree 4 in the coefficients. This implies the function has to be a power law with power -1/6, $Z(A)=\mathrm{const}\cdot D(A)^{-1/6}$, with some constant in front. This constant is infinite of course, but it has no dependence on the cubic A. We could expect no better from a badly behaved divergent integral.
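Both facts used here, the degree-4 scaling of the discriminant and its SL(2) invariance, can be checked directly (the formula below is the standard binary-cubic discriminant; the sample numbers are mine):

```python
def disc(a, b, c, d):
    """Discriminant of the binary cubic a u^3 + b u^2 v + c u v^2 + d v^3."""
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

a, b, c, d = 2.0, -1.0, 3.0, 0.5

# Degree 4 in the coefficients: D(lam * A) = lam**4 * D(A).
lam = 1.7
assert abs(disc(lam*a, lam*b, lam*c, lam*d) - lam**4 * disc(a, b, c, d)) < 1e-6

# SL(2) invariance under the shear (u, v) -> (u + t v, v):
t = 0.9
sheared = (a, 3*a*t + b, 3*a*t**2 + 2*b*t + c, a*t**3 + b*t**2 + c*t + d)
assert abs(disc(*sheared) - disc(a, b, c, d)) < 1e-6
```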

We continue this process for higher and higher numbers of variables with cubic and quartic polynomials, and hopefully the dependence of the ring of invariants on n (number of variables) and r (polynomial degree) gives an idea of how it should behave in the infinite dimensional vector space we actually want. It would be nice to explain the answers we get in path integrals by invariance properties that have to hold, but I have to admit this sounds very hopeful considering we don't know a lot of the invariant rings for different values of n and r.

Thursday, November 3, 2011

K3 and Moonshining

The moonshining industry links classification results of simple groups and coefficients in expansions such as the ones that you get as partition functions in some field theories.

http://arxiv.org/PS_cache/arxiv/pdf/1004/1004.0956v2.pdf

For Kummer K3 surfaces, Katrin Wendland has constructed a bijection of lattices that provides a way for the Mathieu group to act. On one side there is the integral homology lattice and on the other there is the Niemeier lattice.

http://de.arxiv.org/PS_cache/arxiv/pdf/1107/1107.3834v1.pdf

The symmetry group acting on these lattices is a subgroup of the Mathieu group of order 40320. This gets a little bit closer to getting the entire Mathieu group acting on this nonlinear sigma model with target space a Kummer K3.

Friday, October 14, 2011

What's so messed up about d dimensions?

This is going to be a big list so I will build it up gradually.

Dimension 2

  • Orientable, metrizable surfaces can be given a Riemann surface structure; by uniformization these are conformally equivalent to the sphere, the plane or the hyperbolic plane, or quotients thereof.
  • $S^2$ is one of the two spheres (the other being $S^6$) which admit an almost complex structure
  • Highest dimension in which continuous symmetries cannot be spontaneously broken.


Dimension 3

  • The geometry of finite volume hyperbolic manifolds of at least this dimension is determined by the fundamental group (Mostow rigidity). The finite isometry group that remains is given by the outer automorphisms of the fundamental group.


Dimension 4

  • Having a h-cobordism between two simply connected smooth 4-manifolds does not give you an isomorphism between them.
  • This is the only dimension in which there are exotic versions of $\mathbb{R}^4$, and there are infinitely many to boot.
  • The double cover of the conformal group of 3+1 dimensions is Spin(4,2)=SU(2,2) allowing twistors
  • Can solve Ising model in 3+1 with the usual bosonic variables
  • No Hair theorem in 3+1 dimensions
  • You begin to be able to get away with mean field theory in Ising


Dimension 6

  • $S^6$ is one of the two spheres (the other being $S^2$) which admit an almost complex structure
  • Last Superconformal Algebra


Dimension 8

  • There is a unique even self-dual lattice in this dimension. It is the one for the $E_8$ root system.


Dimension 10

  • Conformal anomaly in superstrings cancels


Dimension 11

  • M-theory appears at strong coupling of the type IIA string

Wednesday, October 12, 2011

80s for 80s Babies

Check these notes out.

http://math.berkeley.edu/~devans/witten.html

Monday, October 3, 2011

Schramm-Loewner

Consider the following:

I have an Ising model on the upper half plane with a boundary condition that all the spins left of the origin are up and the spins right of the origin are down. Then there is some curve starting from the origin and going into the upper half plane (may or may not come back to the boundary) that describes the domain wall between up and down.




But by Riemann mapping there is a conformal map

$g_t:\;D\setminus\gamma[0,t]\;\to\;D$

where D is the upper half plane or Poincare disk, take your pick (I'm going to go with disk).

Loewner says we get a differential equation for g. In the half plane picture it reads

$\frac{\partial g_t(z)}{\partial t}=\frac{2}{g_t(z)-\lambda(t)},$

where the driving function

$\lambda(t)=g_t(\gamma(t))$

describes where the domain wall goes when we make it part of the boundary as we unzip with g.

The domain wall really should be random. The right driving function is Brownian motion. So Brownian motion on the circle gives, via the Loewner equation, a description of the domain wall in the 2D Ising model. Brownian motion in 1D relates to conformal invariance in 2D. This still works even when the domain wall is not simple: just take the unbounded component of the complement instead of simply the complement of the curve. We still get a description of the domain wall in terms of Brownian motion on the circle.
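To get a feel for this, here is a minimal numerical sketch (entirely my own; it uses the standard vertical-slit discretization of the chordal, half-plane Loewner equation with driving function $\sqrt{\kappa}\,B_t$) that locates the tip of the random curve:

```python
import cmath, math, random

def sle_tip(kappa, T=1.0, n=2000, seed=1):
    """Tip gamma(T) of an SLE_kappa trace via the vertical-slit
    discretization of the chordal Loewner equation
        dg_t(z)/dt = 2 / (g_t(z) - lambda(t)),  lambda(t) = sqrt(kappa)*B_t."""
    dt = T / n
    rng = random.Random(seed)
    lam = [0.0]
    for _ in range(n):
        lam.append(lam[-1] + math.sqrt(kappa * dt) * rng.gauss(0.0, 1.0))
    # gamma(T) = f_1(...f_n(lam[n])...) where each f_k inverts a slit map.
    w = complex(lam[n], 0.0)
    for k in range(n, 0, -1):
        s = cmath.sqrt((w - lam[k]) ** 2 - 4 * dt)
        if s.imag < 0:
            s = -s               # pick the branch landing in the upper half plane
        w = lam[k] + s
    return w

tip = sle_tip(3.0)               # kappa = 3: the Ising domain wall
assert tip.imag > 0              # the curve lives in the upper half plane
```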

The parameter $\kappa$ measuring how fast the Brownian motion goes,

$\lambda(t)=\sqrt{\kappa}\,B_t,$

controls the situation in the CFT description of the Ising model. There are many special values. The case we were talking about, the Ising domain wall, is $\kappa=3$. Other values will give other kinds of random curves which correspond to different CFTs.

The central charge is determined by this via

$c=\frac{(3\kappa-8)(6-\kappa)}{2\kappa}.$

If you plug in the Ising case $\kappa=3$ you get 1/2, and all is right with the world.

Another cool duality is that:

$\mathrm{SLE}_\kappa\;\longleftrightarrow\;\mathrm{SLE}_{16/\kappa}$

One is below 4 and the other above 4, with $\kappa=4$ self dual, as it probably should be if it is the free field.
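Assuming the standard formula $c=\frac{(3\kappa-8)(6-\kappa)}{2\kappa}$, both the Ising value and the $\kappa\leftrightarrow 16/\kappa$ duality are one-liners to check (function name mine):

```python
def central_charge(kappa):
    """CFT central charge attached to SLE_kappa (standard formula)."""
    return (3 * kappa - 8) * (6 - kappa) / (2 * kappa)

print(central_charge(3))   # 0.5, the Ising central charge
print(central_charge(4))   # 1.0, the free field
# The duality kappa <-> 16/kappa leaves the central charge unchanged:
for k in (2, 3, 6, 8):
    assert abs(central_charge(k) - central_charge(16 / k)) < 1e-12
```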

Monday, September 5, 2011

Connes Lott

http://math.ucr.edu/home/baez/twf_ascii/week83

This says something amazing. Put two 3+1 spacetimes next to each other and you get electroweak theory with a Higgs automatically. The Weinberg angle, which rotates the Z and photon into the SU(2)xU(1) generators, comes out about 10% off from the actual standard model. Still, it is so close from just noncommutative geometry.

If someone can tell me if this relates to Randall-Sundrum 1, that would also be appreciated.

Dimensions and Categorification

So Witten's Khovanov paper reflects this idea that categorifying in math corresponds to adding dimensions in physics. He goes from a three dimensional theory to a five dimensional theory in order to go from Jones polynomial to Khovanov homology. More brane-y, more dimensions, more categorical. See http://math.ucr.edu/home/baez/diary/fqxi_narrative.pdf

There is a lot going on here (quantum groups, 2-groups, infuriatingly weird combinatorics) and I am putting this post up here mainly so I am forced to learn this soon enough to fulfill my promise to tell you about it.

Can anybody explain the classical-quantum correspondence to me in terms of categorification?? Is that even possible? It just seemed like very similar arguments. Danke.

Representable Functors

Take your favorite category C. For each object A there is a functor

$h_A=\mathrm{Hom}_C(-,A)$

from C-op to Set. (This is a contravariant functor and a lower index; NOT the same convention used for contravariant vectors in physics.) We could do the other one, $\mathrm{Hom}_C(A,-)$, by switching A and -. That tells you what happens to objects. What happens to morphisms, the only other thing to specify, is pre- and post-composition respectively.

In fact the assignment from A to the functor $h_A$ is also functorial. It is a functor from C to the functor category Fun(C-op, Set), where morphisms are natural transformations. I hear you like morphisms, so we put some morphisms on your morphisms. That is just pretentious abstract nonsense talk for the associativity of composition.

This functor is the Yoneda embedding. It embeds the category C into the category Fun[C-op, Set]. Why bother translating these trivialities into such foul language?

The reason is that say we have a functor we want to understand, like the functor from Rings to Set that takes a ring R to the zero set of some polynomial f, where the variables are taken from that ring:

$R\;\mapsto\;\{\,r\in R^n : f(r)=0\,\}.$

This functor is isomorphic to one of those above, the covariant one. Namely take A to be

$A=\mathbb{Z}[x_1,\dots,x_n]/(f),$

so that the solution set is $\mathrm{Hom}(A,R)$.
So instead of understanding the equation, understand this functor. Things that seem artificial to do to the equation are more readily explained in the category language. So essentially this language shows you are not pulling all your constructions out of your ass.
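For finite rings this functor is literally computable. A toy sketch (my choice of polynomial, $x^2+y^2-1$, over $\mathbb{Z}/n$), including a check that the reduction homomorphism $\mathbb{Z}/12\to\mathbb{Z}/4$ carries solutions to solutions, which is exactly functoriality:

```python
def solutions(n):
    """The zero-set functor applied to the finite ring Z/n:
    all pairs (x, y) with x^2 + y^2 - 1 = 0 (mod n)."""
    return {(x, y) for x in range(n) for y in range(n)
            if (x * x + y * y - 1) % n == 0}

# Functoriality: the ring hom Z/12 -> Z/4 maps solutions to solutions.
target = solutions(4)
print(all((x % 4, y % 4) in target for (x, y) in solutions(12)))  # True

print(len(solutions(5)))  # 4 points on the "circle" over Z/5
```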

We will use this to explain why manifolds are shiny by describing the category of smooth or analytic manifolds.

Monday, August 22, 2011

BPS

Does not actually stand for "brane preserving supersymmetry" (it is Bogomol'nyi-Prasad-Sommerfield), but that's what it's doing.

Remember the Reissner-Nordstrom metric around a charged black hole. It has two horizons at

$r_\pm=M\pm\sqrt{M^2-Q^2}$

(in units with G = c = 1).

Of course the case M = Q is the extremal one where the horizons coincide, and the bound M ≥ Q has a supersymmetric explanation. In the N = 2 algebra the supercharges obey, schematically,

$\{Q,Q^\dagger\}\sim P$

and

$\{Q,Q\}\sim Z$

with Z the central charge. Doing a change of basis to this gives instead, schematically,

$\{a,a^\dagger\}\sim M-Z,\qquad \{b,b^\dagger\}\sim M+Z.$

Of course the RHS should be positive, since each left hand side is a manifestly nonnegative operator. So this gives a bound banning M < Z.
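A tiny numerical companion to the horizon formula and the bound (function name mine, units G = c = 1):

```python
import math

def horizons(M, Q):
    """Reissner-Nordstrom horizons r_pm = M +/- sqrt(M^2 - Q^2),
    real only when the bound M >= |Q| holds."""
    disc = M * M - Q * Q
    if disc < 0:
        return None              # naked singularity: the bound is violated
    s = math.sqrt(disc)
    return M - s, M + s

print(horizons(2.0, 1.0))   # two distinct horizons
print(horizons(1.0, 1.0))   # (1.0, 1.0): extremal, horizons coincide
print(horizons(1.0, 2.0))   # None: M < |Q| is banned
```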

To be continued with actual branes preserving supersymmetry.