Source Matching

A common task in multiwavelength astronomy is to take two catalogs of objects and compare them in order to identify the same objects in each catalog. This can be implemented quite easily, by looping over each object in one catalog and computing the distance between it and each object in a second catalog. In Cartesian space, the distance is just

<latex>D_{ij} = \left[ \left( x_i - x_j \right)^2 + \left( y_i - y_j \right)^2 \right]^{1/2}</latex>

Here, <latex>x_i, y_i</latex> is the position of an object in the first catalog, and <latex>x_j, y_j</latex> are each of the positions in the second catalog. A match is defined if the distance between two objects is less than the quadrature sum of the uncertainties in the positions of the two objects. The coordinates astronomers work with are of course spherical; typically, <latex>\alpha</latex> and <latex>\delta</latex>, representing the right ascension and declination. If you are matching objects that are close together, then you can use the following approximation:

<latex>D = \left[ \left( \alpha_{\rm ref} - \alpha_i \right)^2 \cos^2\delta_{\rm ref} + \left( \delta_{\rm ref} - \delta_i \right)^2 \right]^{1/2}</latex>

This process can take a long time, however, if there are a lot of objects to compare. The process above becomes an order <latex>N^2</latex> process, because we have to compute the distance between every pair of stars, and then find which offsets were smaller than the positional uncertainties.

For a long time, I lived with the slow, order <latex>N^2</latex> process, because my lists of stars were relatively small; a few thousand in each catalog. Making lists of matches wasn't that slow. However, I eventually started using a longer list for one of my catalogs, containing millions of stars. The process was slower, but I could take a little break and come back to the result when the computer was done.
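The brute-force comparison described above can be sketched in a few lines. The original implementation was in IDL; the Python below is an illustrative stand-in, using the small-angle approximation for the separation (all names are my own):

```python
import numpy as np

def angular_sep(alpha1, delta1, alpha2, delta2):
    """Approximate angular separation in degrees, valid for nearby
    positions: scale the RA difference by cos(declination)."""
    dalpha = (alpha1 - alpha2) * np.cos(np.radians(delta1))
    ddelta = delta1 - delta2
    return np.hypot(dalpha, ddelta)

def naive_match(cat1, cat2, tol):
    """Order N^2 cross-match: compare every pair of objects and keep
    index pairs whose separation is below `tol` (degrees)."""
    matches = []
    for i, (a1, d1) in enumerate(cat1):
        for j, (a2, d2) in enumerate(cat2):
            if angular_sep(a1, d1, a2, d2) < tol:
                matches.append((i, j))
    return matches
```

The nested loops make the quadratic cost explicit: every object in the first catalog is tested against every object in the second.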
Then, I realized another step was needed in the process of making a catalog. I needed to know how likely it was that the matches between objects in the two catalogs were real. I could do a quick calculation of the density of stars in one catalog, and compute the probability that a star would randomly fall within the positional uncertainty of a star in the second catalog. From that probability, I could estimate how many of the matches were spurious, and consequently how many were genuine. However, in my application, the stars were not uniformly distributed; I was looking at the Galactic center, where stars were centrally concentrated, and where differential absorption produced patches with few stars. The only way to approach my problem was through a Monte Carlo simulation. I would add a random, uniform shift to one catalog, and then test how many matches occurred between the bootstrapped distribution and the second catalog. Then, I would repeat the process hundreds of times so I could derive confidence limits. With an order <latex>N^2</latex> matching algorithm, this was barely tolerable for catalogs of a few thousand stars, and excruciating when one of the catalogs had tens or hundreds of thousands of objects (I didn't try millions).

I knew there had to be a better matching algorithm (otherwise Gator would be prohibitively slow), but I couldn't find one in the literature. So, I reinvented the wheel. One can produce a faster matching algorithm as follows.
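One common sorting-based scheme, sketched here as an assumption rather than a reproduction of the original IDL code, is to sort the second catalog by declination once, then for each object binary-search a narrow declination window and test only the few candidates inside it:

```python
import numpy as np

def sorted_match(cat1, cat2, tol):
    """Sorting-based cross-match: order cat2 by declination once, then
    for each cat1 object binary-search the declination window
    [delta - tol, delta + tol] and test only the objects inside it.
    cat1 and cat2 are (N, 2) sequences of (alpha, delta) in degrees;
    returns index pairs into the original catalogs."""
    cat2 = np.asarray(cat2, dtype=float)
    order = np.argsort(cat2[:, 1])      # sort second catalog by dec
    dec2 = cat2[order, 1]
    matches = []
    for i, (a1, d1) in enumerate(np.asarray(cat1, dtype=float)):
        lo = np.searchsorted(dec2, d1 - tol, side="left")
        hi = np.searchsorted(dec2, d1 + tol, side="right")
        for k in order[lo:hi]:          # only candidates in the window
            a2, d2 = cat2[k]
            sep = np.hypot((a1 - a2) * np.cos(np.radians(d1)), d1 - d2)
            if sep < tol:
                matches.append((i, int(k)))
    return matches
```

Because each binary search is logarithmic and the declination window typically contains only a handful of candidates, the per-object cost is nearly constant for realistic catalogs.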
This process will be order N, significantly faster. With this, the Monte Carlo routine to count the number of spurious matches runs at a tolerable speed, even if one catalog contains 10,000 objects and the other a million (although you might still want to take a short break when you run it). The following files implement these sorting and matching algorithms in IDL:
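As a complement, the Monte Carlo spurious-match test described above can be sketched in Python (a hypothetical version with illustrative names; the original implementation was in IDL, and the match-counting step is kept brute-force for brevity):

```python
import numpy as np

def count_matched(cat1, cat2, tol):
    """Number of cat1 objects with at least one cat2 object within
    `tol` degrees.  cat1, cat2 are (N, 2) arrays of (alpha, delta)."""
    n = 0
    for a1, d1 in cat1:
        sep = np.hypot((cat2[:, 0] - a1) * np.cos(np.radians(d1)),
                       cat2[:, 1] - d1)
        n += bool(np.any(sep < tol))
    return n

def spurious_match_estimate(cat1, cat2, tol, n_trials=100,
                            max_shift=0.1, seed=0):
    """Monte Carlo estimate of chance matches: apply a random uniform
    shift to one catalog, re-match against the other, and repeat,
    accumulating the distribution of spurious-match counts."""
    rng = np.random.default_rng(seed)
    counts = np.empty(n_trials)
    for t in range(n_trials):
        shift = rng.uniform(-max_shift, max_shift, size=2)
        counts[t] = count_matched(cat1 + shift, cat2, tol)
    # Mean plus a rough 68% interval from the trial distribution.
    return counts.mean(), np.percentile(counts, [16, 84])
```

Shifting one catalog destroys the genuine correlations while preserving each catalog's spatial structure, which is what makes this approach work even for the non-uniform star fields described above.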
