
gaia-comoving-stars's People

Contributors

adrn, davidwhogg, smoh, timothydmorton


Forkers

johnnygreco

gaia-comoving-stars's Issues

apply Lutz-Kelker correction

I think we should be using this when we make our dumb plots (i.e., the plots that just use 1/parallax as distance).
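For concreteness, a minimal sketch of the "dumb" distance and of where a Lutz-Kelker-type correction would kick in. The parallax units (mas), column handling, and the S/N threshold are placeholders, not what the plotting scripts actually do:

```python
import numpy as np

def naive_distance_pc(parallax_mas, parallax_error_mas, snr_min=5.0):
    """Compute the 'dumb' 1/parallax distance and flag stars where the
    Lutz-Kelker bias is likely to matter (low parallax S/N). The S/N
    threshold here is illustrative only."""
    parallax_mas = np.asarray(parallax_mas, dtype=float)
    snr = parallax_mas / np.asarray(parallax_error_mas, dtype=float)
    dist_pc = 1000.0 / parallax_mas   # parallax in mas -> distance in pc
    needs_lk = snr < snr_min          # 1/parallax is significantly biased here
    return dist_pc, needs_lk
```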

Solar stalkers

Tremaine asked me again today if there are any stars in the catalog consistent with being co-moving with the Sun. This is just a reminder for either of us to visualize / look at stars that have proper motions consistent with zero.
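Something like the selection below is what I have in mind; it ignores the pmra/pmdec covariance and the column names and sigma cut are only illustrative:

```python
import numpy as np

def solar_stalker_mask(pmra, pmdec, pmra_error, pmdec_error, nsigma=3.0):
    """Select stars whose total proper motion is consistent with zero at the
    nsigma level, i.e., candidates for co-moving with the Sun. Column names
    follow TGAS, but this is a sketch, not code from the repo."""
    pm = np.hypot(pmra, pmdec)
    # propagated uncertainty on |pm|, guarding against pm == 0
    pm_error = np.hypot(pmra * pmra_error, pmdec * pmdec_error) / np.where(pm > 0, pm, 1.0)
    return pm < nsigma * pm_error
```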

Binary star fits done; better-matched single-star control sample needed

Here is a preview summary of the population of fitted binaries (only 1290, not 1842, so some of them failed; will need to check why).

[image: summary of the fitted binary population]

And here is an attempt at a control sample (1842 stars randomly chosen from TGAS):

[image: same summary for the random TGAS control sample]

A lot of the singles seem to want to be ~10^5 years old; not sure why. For this reason, @smoh, I think it would be good if you sent me a matched list of stars selected with cuts similar to the binary sample (e.g., on parallax SNR), but restricted to stars that don't have co-moving companions. I think that would make a better control sample.
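Roughly what I'm asking for, as a sketch; the column names, S/N threshold, and seed are placeholders for whatever cuts actually define the candidate pairs:

```python
import numpy as np

def matched_control_sample(tgas, pair_idx, n_control, snr_min=8.0, seed=42):
    """Draw a control sample of apparently single stars: same parallax-S/N cut
    as the binary sample, excluding every star that appears in a co-moving
    pair. pair_idx is an (N, 2) array of TGAS row indices."""
    rng = np.random.RandomState(seed)
    snr = np.asarray(tgas['parallax']) / np.asarray(tgas['parallax_error'])
    in_pair = np.zeros(len(snr), dtype=bool)
    in_pair[np.unique(np.ravel(pair_idx))] = True
    eligible = np.where((snr > snr_min) & ~in_pair)[0]
    return rng.choice(eligible, size=n_control, replace=False)
```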

But apart from this, there does seem to be an excess of relatively young stars (~10^7 years) in the binary sample. But there is also weird structure in the age distribution, so I'm not sure what all is going on. Also remember that these are just scatter plots of median posterior values---I'm happy to take any suggestions for better visualizations as well.

systematic uncertainties in parallax?

How are we dealing with the systematic uncertainty in parallax (e.g., as documented here and, more vaguely, in the DR1 documentation)? It looks like for now (from TGASStar.get_cov) we're taking the covariance matrix as-is from the table. Perhaps this is OK since we are only interested in relative parallaxes for the purpose of identifying close pairs, but I just realized that I have also been fitting my starmodels without inflating the parallax uncertainty. Any suggestions on the best way to correct this? The Stassun & Torres paper suggests a systematic offset of -0.25 +/- 0.05 mas.
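One possible way to fold this in, as a sketch only; the sign convention, and whether the offset applies to our sample at all, is exactly the open question here:

```python
import numpy as np

def adjust_parallax(parallax_mas, parallax_error_mas,
                    offset_mas=-0.25, offset_error_mas=0.05):
    """Remove the quoted -0.25 mas offset and add its 0.05 mas uncertainty
    in quadrature. Purely illustrative; not what TGASStar.get_cov does."""
    plx = np.asarray(parallax_mas, dtype=float) - offset_mas
    err = np.sqrt(np.asarray(parallax_error_mas, dtype=float)**2 + offset_error_mas**2)
    return plx, err
```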

Include stellar parameter info in Paper 1

Do we include stellar parameter info inferred from photometry in Paper 1 about the candidate sample? My gut says "yes" since it seems like it is fairly easy for @timothydmorton to run on a sample with the numbers we expect, but it could also move to the population inference paper. Any strong opinions?

TGAS coordinate matching

@adrn and I confirmed that the positions in the TGAS catalog have an epoch of 2015. By my understanding, this means that in order to compare coordinates to, e.g., a catalog with J2000 coordinates, I have to adjust both RA and Dec by 15 years of proper motion (15*pm mas).

TGAS also has a tycho2_id column. So, bypassing any catalog-querying step, I can look up positions in the Tycho-2 catalog by ID and compare them with the proper-motion-rewound TGAS coordinates; they should match up.
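Roughly what I'm doing, as a sketch; the actual script may differ in details such as the cos(dec) handling:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def rewind_and_separate(ra2015, dec2015, pmra, pmdec, ra_tyc, dec_tyc, dt_yr=15.0):
    """Rewind TGAS (epoch 2015.0) positions to J2000 by subtracting dt_yr of
    proper motion, then compute separations from the matched Tycho-2 positions.
    Assumes positions in deg, proper motions in mas/yr, and that pmra is
    mu_alpha* (already includes the cos(dec) factor), as in TGAS."""
    mas_per_deg = 3.6e6
    ra0 = ra2015 - dt_yr * pmra / np.cos(np.radians(dec2015)) / mas_per_deg
    dec0 = dec2015 - dt_yr * pmdec / mas_per_deg
    c_tgas = SkyCoord(ra0 * u.deg, dec0 * u.deg)
    c_tyc = SkyCoord(ra_tyc * u.deg, dec_tyc * u.deg)
    return c_tgas.separation(c_tyc).to(u.mas)
```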

When I do this, and calculate the separation angle between the TGAS ra/dec and the Tycho-2 ra/dec for the matched sources, this is what they look like:

[image: distribution of separations between TGAS and Tycho-2 (RAJ2000, DEJ2000) positions]

So this is not bad, but I would feel better if it were better. I will proceed from here, but I just wanted to note this in case anyone had thoughts.

Interestingly enough, the Tycho-2 catalog (as obtained from Vizier) has both (RAJ2000,DEJ2000) columns and (RA_ICRS, DE_ICRS) columns. The following is the separation distribution when I compare evolved TGAS coords to these ICRS-labeled columns:

[image: distribution of separations between TGAS and Tycho-2 (RA_ICRS, DE_ICRS) positions]

Data needed for fitting stellar models

When we have candidate binary pairs, the most convenient way for me to receive them would be just a set of (i, j) indices (with i < j to avoid duplicates), where the indices refer to row numbers in the entire TGAS table. I'm preparing to receive such a list so that I can quickly turn around the StarModel fits.
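A sketch of the hand-off format, with a hypothetical file name and example pairs:

```python
import numpy as np

# (N, 2) array of row indices into the full TGAS table, with i < j to
# avoid duplicates. The pairs below are made-up examples.
candidate_pairs = [(12, 5), (5, 12), (7, 9)]
pairs = np.array(sorted({tuple(sorted(p)) for p in candidate_pairs}))
np.savetxt('candidate_pairs.txt', pairs, fmt='%d')

# On the receiving end (for the StarModel fits):
i, j = np.loadtxt('candidate_pairs.txt', dtype=int, unpack=True)
```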

I will also need a similarly sized list of apparently single stars, for me to fit as a control sample, to see if there are systematic differences in age between these two samples.

Submit the paper!

  • submit to AAS journals, preferred journal = ApJ (you can't submit directly to ApJ anymore), put Spergel under "who is handling page charges" 💰💰💰
  • make this repository public
  • submit to arxiv, link to web visualization in comments (if it's ready)
  • lunch is on me Hogg! (by which he probably means Jim Simons)

designate row/col to delete without dependence on their values

There's a bug in lines 58-60 of likelihood.py. I think it is better to set idx by hand, since it is always the third row for every star (the RV component), if I'm reading things right (a sketch of what I mean is below, after the traceback).
When d is very large, the /= d**2 in line 49 of get_Cinv makes the entries of Cinv_dirty very small; those rows/columns then get deleted, and Cinv ends up as an empty matrix.
This happens for star 2 and star 299524 in stacked_tgas.

/Users/semyeong/projects/gaia-wide-binaries/gwb/likelihood.pyc in get_Ainv_nu_Delta(d, M_dirty, Cinv_dirty, y_dirty, Vinv)
    59     Cinv = np.delete(Cinv_dirty, idx, axis=0)
    60     Cinv = np.delete(Cinv, idx, axis=1)
---> 61     _,log_detCinv = np.linalg.slogdet(Cinv/(2*np.pi))
    62 
    63     M = np.delete(M_dirty, idx, axis=0)



/Users/semyeong/anaconda2/lib/python2.7/site-packages/numpy/linalg/linalg.pyc in slogdet(a)
  1704     """
  1705     a = asarray(a)
-> 1706     _assertNoEmpty2d(a)
  1707     _assertRankAtLeast2(a)
  1708     _assertNdSquareness(a)



/Users/semyeong/anaconda2/lib/python2.7/site-packages/numpy/linalg/linalg.pyc in _assertNoEmpty2d(*arrays)
   220     for a in arrays:
   221         if a.size == 0 and product(a.shape[-2:]) == 0:
--> 222             raise LinAlgError("Arrays cannot be empty")
   223 
   224 



LinAlgError: Arrays cannot be empty

tracebacks:

ipdb> d

/Users/semyeong/projects/gaia-wide-binaries/gwb/likelihood.py(87)_marg_likelihood_helper()
    85     Cinv = get_Cinv(ds, data)
    86 
---> 87     Ainv,nu,Delta = get_Ainv_nu_Delta(ds, M, Cinv, y, Vinv)
    88     sgn,log_detAinv = np.linalg.slogdet(Ainv/(2*np.pi))
    89     log_detA = -log_detAinv


ipdb> ds
6685.9553296605964
ipdb> d

/Users/semyeong/projects/gaia-wide-binaries/gwb/likelihood.py(61)get_Ainv_nu_Delta()
    59     Cinv = np.delete(Cinv_dirty, idx, axis=0)
    60     Cinv = np.delete(Cinv, idx, axis=1)
---> 61     _,log_detCinv = np.linalg.slogdet(Cinv/(2*np.pi))
    62 
    63     M = np.delete(M_dirty, idx, axis=0)


ipdb> Cinv_dirty
array([[  7.55760737e-09,  -2.53065913e-09,   0.00000000e+00],
      [ -2.53065913e-09,   9.37824516e-09,   0.00000000e+00],
      [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00]])
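Here is a sketch of the fix I have in mind for lines 58-60 of get_Ainv_nu_Delta, assuming the stacked data vector is [pmra, pmdec, rv] per star (that layout is my reading of the code, so please check):

```python
import numpy as np

# Hard-code which rows/columns to drop (the RV component, i.e., every third
# row) instead of inferring them from the values of Cinv_dirty, which fails
# when 1/d**2 makes everything tiny.
n_stars = len(y_dirty) // 3
idx = np.arange(2, 3 * n_stars, 3)   # e.g. array([2]) or array([2, 5])
Cinv = np.delete(np.delete(Cinv_dirty, idx, axis=0), idx, axis=1)
M = np.delete(M_dirty, idx, axis=0)
y = np.delete(y_dirty, idx, axis=0)
```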

let there be only one frame of indices?

@adrn Right now you cut the stacked TGAS catalog with an S/N cut in line 39 of generate-par-sample.py, and the indices generated from this file refer to that particular S/N-cut TGAS, so we need to give compute-likelihood-ratio.py the same S/N cut. I think this is quite prone to mistakes... it would be better to only ever refer to indices into the entire TGAS catalog. This may also be an issue when we pass info to @timothydmorton. Let me know what you think.
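Something like the mapping below is what I'm suggesting; the names are illustrative, and snr_min must match whatever cut generate-par-sample.py applies:

```python
import numpy as np

def to_full_tgas_indices(sub_idx, snr, snr_min):
    """Map row indices defined on the S/N-cut subsample back to row indices
    into the full stacked TGAS table, so that every file we exchange uses a
    single frame of indices."""
    full_rows = np.where(snr > snr_min)[0]   # full-table row of each subsample row
    return full_rows[np.asarray(sub_idx)]
```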
