{ "cells": [ { "cell_type": "markdown", "id": "3a4c8694", "metadata": {}, "source": [ "# Query PS1 catalog database for moving targets\n", "\n", "This notebook is available for download.\n", "In the future it will be incorporated into the [MAST notebooks repository](https://spacetelescope.github.io/mast_notebooks/intro.html).\n", "\n", "This searches for Pan-STARRS1 catalog detections of the asteroid \n", "[**174361 Rickwhite (2002 TV315)**](https://ssd.jpl.nasa.gov/tools/sbdb_lookup.html#/?sstr=rickwhite&view=OPD). The [CADC solar system image search](https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/ssois/index.html#name) API is used to get an initial list of observations. Then the [JPL Horizons database](https://ssd.jpl.nasa.gov/horizons/) is queried (using the `astroquery.jplhorizons` module) to get the position of the target as seen from the PS1 observatory on Mt Haleakala at the exact times of the PS1 observations. Those positions are passed to the \n", "[PS1 CasJobs database](https://mastweb.stsci.edu/ps1casjobs) where they are cross-matched with the PS1 detection catalog.\n", "\n", "For asteroid `rickwhite`, there are 93 potential observations identified. Image cutouts are extracted from all those images using the [ps1images](https://ps1images.stsci.edu) image cutout service. 30 images are blank because the object position falls in a chip gap or off the edge of the camera. The 63 remaining images are displayed and show that the positions are indeed correct. Some of the non-blank images still have gaps or other artifacts at the object position.\n", "\n", "Using CasJobs, the PS1 catalog positions for the good images are queried and all matching detections from those images are returned. 46 matches are found for the 63 images. \n", "\n", "The time for all queries is about 1 minute (including the time to extract the image cutouts and to query the MAST database). The time and number of results returned should be roughly similar for most other solar system objects in the PS1 observing area.\n", "\n", "Dependencies:\n", "\n", "* `mastcasjobs` module to query MAST CasJobs database from Python\n", "* `astroquery.jplhorizons` to access JPL Horizons ephemerides for objects\n", "* `Pillow` (PIL image module) to read JPEG images\n", "* Custom code is used to access the [CADC SSOS service](https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/ssois/index.html#name)\n", "\n", "*R. 
White, 2025 February 3*" ] }, { "cell_type": "markdown", "id": "323fdd65", "metadata": {}, "source": [ "## Imports and configuration" ] }, { "cell_type": "code", "execution_count": null, "id": "ce3e8464", "metadata": {}, "outputs": [], "source": [ "import astropy\n", "from astropy.table import Table, join, vstack\n", "from astropy.time import Time\n", "from astropy.io import fits\n", "\n", "from astroquery.jplhorizons import Horizons\n", "import mastcasjobs\n", "from PIL import Image, UnidentifiedImageError\n", "\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from io import BytesIO, StringIO\n", "import time\n", "import requests\n", "import sys\n", "import os\n", "import getpass\n", "\n", "saveplots = False # set to save some of the plots to png files" ] }, { "cell_type": "code", "execution_count": null, "id": "ced736a6", "metadata": {}, "outputs": [], "source": [ "# code to customize display\n", "from IPython.display import display, Markdown, Latex, HTML\n", "\n", "# Set page width to fill browser for longer output lines\n", "# and make tables left-aligned\n", "display(HTML(\"\"\"\n", "\n", "\"\"\"))\n", "# set width for pprint\n", "astropy.conf.max_width = 150" ] }, { "cell_type": "markdown", "id": "28da8a57", "metadata": {}, "source": [ "## Get and check CasJobs user login information\n", "\n", "This uses the `CASJOBS_USERID` and `CASJOBS_PW` environment variables if they are defined. It prompts for them if the environment variables are not defined.\n", "\n", "If you do not have a CasJobs account, create one using the [MAST CasJobs website](https://mastweb.stsci.edu/ps1casjobs/)." ] }, { "cell_type": "code", "execution_count": null, "id": "305d4882", "metadata": {}, "outputs": [], "source": [ "if not os.environ.get('CASJOBS_USERID'):\n", " os.environ['CASJOBS_USERID'] = input('Enter Casjobs UserID:')\n", "if not os.environ.get('CASJOBS_PW'):\n", " os.environ['CASJOBS_PW'] = getpass.getpass('Enter Casjobs password:')\n", "\n", "jobs = mastcasjobs.MastCasJobs(context=\"MyDB\")\n", "print(\"Confirmed MAST CasJobs account access\")" ] }, { "cell_type": "markdown", "id": "bc87dbf3", "metadata": {}, "source": [ "## Function to search moving object path using CADC\n", "\n", "See the [CADC moving target API documentation](https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/ssois/apidoc.html) for more details about the moving target search. " ] }, { "cell_type": "code", "execution_count": null, "id": "4dc2343f", "metadata": {}, "outputs": [], "source": [ "def cadc_ssos_query(object_name, search=\"bynameCADC\", epoch1=54985, epoch2=57079,\n", " xyres=\"no\", telinst=\"Pan-STARRS1\", lang=\"en\", format=\"tsv\",\n", " url=\"https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/cadcbin/ssos/ssosclf.pl\"):\n", " \"\"\"Use CADC moving object query to find PS1 observations by target name\n", " \n", " The epoch parameters give the range in MJD for the PS1 observations.\n", " The only parameter that might be usefully modified is the search. \n", " E.g., use search=\"bynameHorizons\" to search using the JPL Horizons ephemeris rather\n", " than the ephemeris cached at CADC. 
That might be useful if the CADC cache has become\n", " out-of-date.\n", " \n", " This returns an astropy table with the observations.\n", " \"\"\"\n", " r = requests.get(url, params=dict(lang=lang, object=object_name, search=search,\n", " epoch1=epoch1, epoch2=epoch2, xyres=xyres,\n", " telinst=telinst, format=format))\n", " r.raise_for_status()\n", " # raise exception for object not found\n", " msg = r.text.split('\\n',1)[0]\n", " if msg.lower().find(\"not found\") >= 0:\n", " if msg.lower().endswith(\"<br/>
\"):\n", " msg = msg[:-5]\n", " raise ValueError(f\"Error from CADC search: {msg}\")\n", " tab = Table.read(r.text, format=\"ascii.csv\", delimiter=\"\\t\",\n", " converters=dict(Image_target=str))\n", " return tab" ] }, { "cell_type": "markdown", "id": "65224387", "metadata": {}, "source": [ "## Functions to get images from PS1 cutout server" ] }, { "cell_type": "code", "execution_count": null, "id": "7541f579", "metadata": {}, "outputs": [], "source": [ "def getimages(tab, format=\"jpg\", verbose=False, **kw):\n", " \n", " \"\"\"Return list of images specified by the table with usual columns\n", " (projcell, subcell, image, ra and dec columns)\n", " \n", " Parameters:\n", " format = data format (options are \"jpg\", \"png\", \"fits\")\n", " verbose = if true, print progress info\n", " Other parameters passed on to tab2cutout:\n", " size = extracted image size in pixels (0.25 arcsec/pixel)\n", " output_size = output (display) image size in pixels (default = size).\n", " output_size has no effect for fits format images.\n", " racol, deccol, imagecol = names of table columns\n", " Returns the image as a PIL Image or an astropy fits hdulist\n", " \"\"\"\n", " \n", " if format not in (\"jpg\", \"png\", \"fits\"):\n", " raise ValueError(\"format must be jpg, png or fits\")\n", " urls = tab2cutout(tab, format=format, verbose=verbose, **kw)\n", " images = []\n", " if verbose:\n", " t0 = time.time()\n", " print(f\"{time.time()-t0:.1f} s: completed 0 of {len(urls)} images\", end=\"\")\n", " for i, url in enumerate(urls,start=1):\n", " r = requests.get(url)\n", " if format == \"fits\":\n", " images.append(fits.open(BytesIO(r.content)))\n", " else:\n", " try:\n", " images.append(Image.open(BytesIO(r.content)))\n", " except UnidentifiedImageError as e:\n", " print(f\"\\nImage not found {url}\")\n", " images.append(None)\n", " if verbose:\n", " print(f\"\\r{time.time()-t0:.1f} s: completed {i} of {len(urls)} images\", end=\"\")\n", " if verbose:\n", " print()\n", " return images\n", "\n", "\n", "def tab2path(tab, imagecol='Image', verbose=False):\n", " \"\"\"Convert table with usual columns (projcell, subcell, filter, expStart) to warp filter path\n", " \n", " Convert expStart from TAI to UTC for the file label\n", " \n", " Returns a list of file paths\n", " \"\"\"\n", " for c in ('projcell','subcell',imagecol):\n", " assert c in tab.colnames, f\"'{c}' column not found in table\"\n", " rv = []\n", " for i, row in enumerate(tab):\n", " projcell = f\"{row['projcell']:04d}\"\n", " subcell = f\"{row['subcell']:03d}\"\n", " image = row[imagecol]\n", " rv.append(f\"/rings.v3.skycell/{projcell}/{subcell}/{image}.fits\")\n", " return rv\n", "\n", "\n", "def tab2cutout(tab, size=64, output_size=256, format=\"jpg\", autoscale=99.5, asinh=True,\n", " racol=\"ra\", deccol=\"dec\", imagecol=\"Image\", verbose=False,\n", " urlbase=\"https://ps1images.stsci.edu/cgi-bin/fitscut.cgi\"):\n", " \"\"\"Convert table with usual columns (projcell, subcell, image, ra and dec columns) \n", " to URL for warp image cutout\n", " \n", " Returns a list of URLs\n", " \"\"\"\n", " for c in ('projcell', 'subcell', imagecol, racol, deccol):\n", " assert c in tab.colnames, f\"'{c}' column not found in table\"\n", " paths = tab2path(tab, imagecol=imagecol, verbose=verbose)\n", " rv = []\n", " for row, path in zip(tab, paths):\n", " params = dict(red=path, ra=row[racol], dec=row[deccol], size=size, format=format)\n", " if filter != \"fits\":\n", " # these parameters are used only for non-fits formats\n", " params['autoscale'] = 
autoscale\n", " if output_size is not None:\n", " params['output_size'] = output_size\n", " if asinh:\n", " params['asinh'] = 1\n", " sparams = '&'.join([f\"{key}={str(value)}\" for key, value in params.items()])\n", " rv.append(f\"{urlbase}?{sparams}\")\n", " return rv" ] }, { "cell_type": "markdown", "id": "fbe8f5bd", "metadata": {}, "source": [ "## Functions to locate PanSTARRS skycell for a given RA/Dec" ] }, { "cell_type": "code", "execution_count": null, "id": "377a4805", "metadata": {}, "outputs": [], "source": [ "# pixel scale is 0.25 arcsec\n", "pixscale = 0.25\n", "\n", "# table of rings info\n", "rings = Table.read(\"\"\"zone projcell nband dec dec_min dec_max xcell ycell crpix1 crpix2\n", "13 487 72 -38.0 -39.98569922229713 -35.98748871852416 6305 6279 239.5 238.0\n", "14 559 76 -34.0 -35.98748871852416 -31.988947843740302 6265 6274 237.5 237.5\n", "15 635 79 -30.0 -31.988947843740302 -27.990585819085737 6274 6269 239.5 241.5\n", "16 714 82 -26.0 -27.990585819085737 -23.991856317278298 6255 6265 241.5 240.5\n", "17 796 84 -22.0 -23.991856317278298 -19.99333106003115 6279 6261 241.0 242.0\n", "18 880 86 -18.0 -19.99333106003115 -15.994464100667857 6274 6258 241.5 238.0\n", "19 966 88 -14.0 -15.994464100667857 -11.995671545879146 6242 6254 240.5 239.5\n", "20 1054 89 -10.0 -11.995671545879146 -7.997014822849273 6248 6250 240.0 242.0\n", "21 1143 89 -6.0 -7.997014822849273 -3.998086931795508 6291 6247 237.5 240.0\n", "22 1232 90 -2.0 -3.998086931795508 1.3877787807814457e-17 6240 6243 240.0 242.0\n", "23 1322 90 2.0 1.3877787807814457e-17 3.998086931795508 6240 6243 240.0 242.0\n", "24 1412 89 6.0 3.998086931795508 7.997014822849273 6291 6247 237.5 240.0\n", "25 1501 89 10.0 7.997014822849273 11.995671545879146 6248 6250 240.0 242.0\n", "26 1590 88 14.0 11.995671545879146 15.994464100667857 6242 6254 240.5 239.5\n", "27 1678 86 18.0 15.994464100667857 19.99333106003115 6274 6258 241.5 238.0\n", "28 1764 84 22.0 19.99333106003115 23.991856317278298 6279 6261 241.0 242.0\n", "29 1848 82 26.0 23.991856317278298 27.990585819085737 6255 6265 241.5 240.5\n", "30 1930 79 30.0 27.990585819085737 31.988947843740302 6274 6269 239.5 241.5\n", "31 2009 76 34.0 31.988947843740302 35.98748871852416 6265 6274 237.5 237.5\n", "32 2085 72 38.0 35.98748871852416 39.98569922229713 6305 6279 239.5 238.0\n", "33 2157 68 42.0 39.98569922229713 43.98384988850235 6320 6284 239.5 240.0\n", "34 2225 64 46.0 43.98384988850235 47.98179099058205 6307 6289 238.0 242.0\n", "35 2289 60 50.0 47.98179099058205 51.97972846465295 6261 6295 241.0 239.5\n", "36 2349 55 54.0 51.97972846465295 55.97676677254534 6283 6302 239.0 242.0\n", "37 2404 50 58.0 55.97676677254534 59.97411834356914 6278 6311 238.5 237.5\n", "38 2454 45 62.0 59.97411834356914 63.97005521318337 6240 6319 240.0 241.0\n", "39 2499 39 66.0 63.97005521318337 67.96475284198297 6307 6333 239.5 240.0\n", "40 2538 33 70.0 67.96475284198297 71.95817764373305 6365 6350 238.5 240.0\n", "41 2571 27 74.0 71.95817764373305 75.94974140116577 6413 6371 240.5 242.0\n", "42 2598 21 78.0 75.94974140116577 79.93968190571101 6452 6398 240.0 242.0\n", "43 2619 15 82.0 79.93968190571101 83.93610604798963 6481 6429 241.0 242.0\n", "44 2634 9 86.0 83.93610604798963 87.96940975734594 6501 6419 239.0 239.5\n", "45 2643 1 90.0 87.96940975734594 90.0001 6240 6240 239.5 240.0\n", "\"\"\", format=\"ascii.csv\", delimiter=\" \")\n", "\n", "dec_limit = rings['dec_min'].min()\n", "\n", "def findskycell(ra, dec):\n", "\n", " \"\"\"Given input arrays RA, DEC (degrees), returns an 
astropy table with the skycell info\n", "\n", " RA and Dec can be scalars or arrays.\n", " This returns just the best sky cell for each ra/dec (the one where the position is closest to the center).\n", " This uses the rings table for info on the tessellation.\n", " The returned table has these columns:\n", " Column Value\n", " ra Input position\n", " dec \n", " index 0-based index into original array\n", " projcell Projection cell (0 if outside coverage)\n", " subcell Subcell (0..99)\n", " crval1 Sky position of reference pixel for projection cell\n", " crval2 \n", " crpix1 Reference pixel in this skycell\n", " crpix2 \n", " x Source pixel position in this skycell\n", " y \n", " iring Index into rings array\n", " \"\"\"\n", "\n", " if np.isscalar(ra) and np.isscalar(dec):\n", " return _findskycell_array(np.array([ra]),np.array([dec]))\n", " if len(ra) == len(dec):\n", " return _findskycell_array(np.asarray(ra),np.asarray(dec))\n", " else:\n", " raise ValueError(\"ra and dec must both be scalars or be matching length arrays\")\n", "\n", "\n", "def _findskycell_array(ra, dec):\n", "\n", " \"\"\"Internal function: ra and dec are known to be arrays\"\"\"\n", "\n", " if ra.ndim != 1 or dec.ndim != 1:\n", " raise ValueError(\"ra and dec must be 1-D arrays\")\n", " index = np.arange(len(ra),dtype=int)\n", " # find dec zone where rings.dec_min <= dec < rings.dec_max\n", " idec = np.searchsorted(rings['dec_max'], dec)\n", "\n", " # special handling at pole where overlap is complicated\n", " # do extra checks for top 2 rings\n", " # always start with the ring just below the pole\n", " nearpole = np.where(idec >= len(rings)-2)\n", " idec[nearpole] = len(rings)-2\n", "\n", " nband = rings['nband'][idec]\n", " # get normalized RA in range 0..360\n", " nra = ra % 360.0\n", " ira = np.rint(nra*nband/360.0).astype(int) % nband\n", "\n", " projcell = rings['projcell'][idec] + ira\n", " dec_cen = rings['dec'][idec]\n", " ra_cen = ira*360.0/nband\n", "\n", " # locate subcell(s) within the projection cell\n", "\n", " # use tangent project to get pixel offsets\n", " x, y = sky2xy_tan(nra, dec, ra_cen, dec_cen)\n", "\n", " pad = 480\n", "\n", " if len(nearpole[0]) > 0:\n", " # handle the points near the pole (if any)\n", " # we know that this \"ring\" has only a single field\n", " projcell2 = rings['projcell'][-1]\n", " dec_cen2 = rings['dec'][-1]\n", " ra_cen2 = 0.0\n", " x2, y2 = sky2xy_tan(nra[nearpole], dec[nearpole], ra_cen2, dec_cen2)\n", " # compare x,y and x2,y2 to image sizes to select best image\n", " # returns a Boolean array with true for values where 2nd image is better\n", " use2 = poleselect(x[nearpole], y[nearpole], x2, y2, rings[-2], rings[-1], pad)\n", " if use2.any():\n", " # slightly obscure syntax here makes this work even if ra, dec are multi-dimensional\n", " wuse2 = np.where(use2)[0]\n", " w2 = tuple(xx[wuse2] for xx in nearpole)\n", " idec[w2] = len(rings)-1\n", " nband[w2] = 1\n", " ira[w2] = 0\n", " projcell[w2] = projcell2\n", " dec_cen[w2] = dec_cen2\n", " ra_cen[w2] = ra_cen2\n", " x[w2] = x2[wuse2]\n", " y[w2] = y2[wuse2]\n", "\n", " # compute the subcell from the pixel location\n", " px = rings['xcell'][idec]-pad\n", " py = rings['ycell'][idec]-pad\n", " k = np.rint(4.5+x/px).astype(int).clip(0,9)\n", " j = np.rint(4.5+y/py).astype(int).clip(0,9)\n", " subcell = 10*j + k\n", "\n", " # get pixel coordinates within the skycell image\n", " crpix1 = rings['crpix1'][idec] + px*(5-k)\n", " crpix2 = rings['crpix2'][idec] + py*(5-j)\n", " ximage = x + crpix1\n", " yimage = y + crpix2\n", 
" iring = idec\n", "\n", " # insert zeros where we are below lowest dec_min\n", " w = np.where(dec < dec_limit)\n", " projcell[w] = 0\n", " subcell[w] = 0\n", " crpix1[w] = 0\n", " crpix2[w] = 0\n", " ximage[w] = 0\n", " yimage[w] = 0\n", "\n", " # return an astropy Table\n", " return Table([ra,dec,index,projcell,subcell,ra_cen,dec_cen,crpix1,crpix2,ximage,yimage,iring],\n", " names=\"ra,dec,index,projcell,subcell,crval1,crval2,crpix1,crpix2,x,y,iring\".split(\",\"))\n", "\n", "\n", "def poleselect(x1, y1, x2, y2, rings1, rings2, pad):\n", "\n", " \"\"\"Compares x,y values from 2 images to determine which is best\n", "\n", " Returns boolean array with True where x2,y2 is best\n", " \"\"\"\n", "\n", " nx1 = 10*(rings1['xcell']-pad)+pad\n", " ny1 = 10*(rings1['ycell']-pad)+pad\n", " nx2 = 10*(rings2['xcell']-pad)+pad\n", " ny2 = 10*(rings2['ycell']-pad)+pad\n", " # compute minimum distances to image edges\n", " # note negative values are off edge\n", " d1 = np.minimum(np.minimum(x1+nx1//2,nx1//2-1-x1), np.minimum(y1+ny1//2,ny1//2-1-y1))\n", " d2 = np.minimum(np.minimum(x2+nx2//2,nx2//2-1-x2), np.minimum(y2+ny2//2,ny2//2-1-y2))\n", " return (d1 < d2)\n", "\n", "\n", "def sky2xy_tan(ra, dec, ra_cen, dec_cen, crpix=(0.0,0.0)):\n", "\n", " \"\"\"Convert RA,Dec sky position (degrees) to X,Y pixel position\n", "\n", " ra[n], dec[n] are input arrays in degrees\n", " ra_cen[n], dec_cen[n] are image centers in degrees\n", " crpix is the reference pixel position (x,y)\n", " Returns tuple (x,y) where x and y are arrays with pixel position for each RA,Dec\n", " \"\"\"\n", "\n", " dtor = np.pi/180\n", " cd00 = -pixscale*dtor/3600\n", " cd01 = 0.0\n", " cd10 = 0.0\n", " cd11 = -cd00\n", " determ = cd00*cd11-cd01*cd10\n", " cdinv00 = cd11/determ\n", " cdinv01 = -cd01/determ\n", " cdinv10 = -cd10/determ\n", " cdinv11 = cd00/determ\n", "\n", " cos_crval1 = np.cos(dtor*dec_cen)\n", " sin_crval1 = np.sin(dtor*dec_cen)\n", "\n", " radif = (ra - ra_cen)*dtor\n", " w = np.where(radif > np.pi)\n", " radif[w] -= 2*np.pi\n", " w = np.where(radif < -np.pi)\n", " radif[w] += 2*np.pi\n", "\n", " decrad = dec*dtor\n", " cos_dec = np.cos(decrad)\n", " sin_dec = np.sin(decrad)\n", " cos_radif = np.cos(radif)\n", " sin_radif = np.sin(radif)\n", " h = sin_dec*sin_crval1 + cos_dec*cos_crval1*cos_radif\n", " xsi = cos_dec*sin_radif/h\n", " eta = (sin_dec*cos_crval1 - cos_dec*sin_crval1*cos_radif)/h\n", " xdif = cdinv00*xsi + cdinv01*eta\n", " ydif = cdinv10*xsi + cdinv11*eta\n", " return (xdif+crpix[0], ydif+crpix[1])\n", "\n", "\n", "def xy2sky_tan(x, y, ra_cen, dec_cen, crpix=(0.0,0.0)):\n", "\n", " \"\"\"Convert X,Y pixel position to RA,Dec sky position (degrees)\n", "\n", " x[n], y[n] are input arrays in pixels\n", " ra_cen[n], dec_cen[n] are image centers in degrees\n", " crpix is the reference pixel position (x,y)\n", " Returns tuple (ra,dec) where ra and dec are arrays with pixel position for each x,y\n", " \"\"\"\n", "\n", " dtor = np.pi/180\n", " cd00 = -pixscale*dtor/3600\n", " cd01 = 0.0\n", " cd10 = 0.0\n", " cd11 = -cd00\n", "\n", " cos_crval1 = np.cos(dtor*dec_cen)\n", " sin_crval1 = np.sin(dtor*dec_cen)\n", "\n", " xdif = x - crpix[0]\n", " ydif = y - crpix[1]\n", " xsi = cd00*xdif + cd01*ydif\n", " eta = cd10*xdif + cd11*ydif\n", " beta = cos_crval1 - eta*sin_crval1\n", " ra = np.arctan2(xsi, beta) + dtor*ra_cen\n", " gamma = np.sqrt(xsi**2 + beta**2)\n", " dec = np.arctan2(eta*cos_crval1+sin_crval1, gamma)\n", " return (ra/dtor, dec/dtor)" ] }, { "cell_type": "markdown", "id": "500abb96", "metadata": {}, 
"source": [ "## Get image list from CADC moving object query \n", "\n", "This query takes about 10 seconds." ] }, { "cell_type": "code", "execution_count": null, "id": "2c586f83", "metadata": {}, "outputs": [], "source": [ "%%time \n", "\n", "source = \"rickwhite\"\n", "\n", "# some other sample objects\n", "# source = \"33798\" # fast-moving object, 10 arcsec/hour\n", "# source = \"134340\" # Pluto\n", "# source = \"2003 YT1\" # asteroid that passes close to North Celestial Pole (only 2 epochs in PS1)\n", "# source = \"Cruithne\" # fast-moving, unusual Earth-associated orbit\n", "# source = \"5261\" # Mars-trojan asteroid\n", "# source = \"243\" # 243 Ida, first asteroid known to have a moon\n", "# source = \"2024 YR4\"\n", "\n", "cadc_tab = cadc_ssos_query(source)\n", "print(f\"Returned table has {len(cadc_tab)} rows\")\n", "cadc_tab[:8]" ] }, { "cell_type": "markdown", "id": "628954ac", "metadata": {}, "source": [ "## Recompute positions using accurate epochs\n", "\n", "The interpolated positions from CADC can be off by a few arcsec, so recompute them using JPL Horizons at the exact times of the PS1 observations.\n", "\n", "The location code `F51` corresponds to the \n", "[Pan-STARRS 1 observatory on Mt. Haleakala](https://en.wikipedia.org/wiki/List_of_observatory_codes). The object positions are converted to RA and Dec as seen from the observatory.\n", "\n", "Note that Horizons uses times in UTC rather than TAI (which is what is in the PS1 database). Appending `TT` to the start time is supposed to cause Horizons to use TT (= TAI $\\pm0.002$ s), but that triggers some kind of parsing bug in the `astroquery.Horizons` module. Do the queries in UTC instead.\n", "\n", "Divide this query into groups of 50 JD values to avoid errors from the service.\n", "\n", "This query takes about 1 second." ] }, { "cell_type": "code", "execution_count": null, "id": "c6c2eb15", "metadata": {}, "outputs": [], "source": [ "# convert input epochs from TAI to UTC\n", "jdepochs = Time(cadc_tab[\"MJD\"],scale=\"tai\",format=\"mjd\").utc.jd.tolist()\n", "tlist = []\n", "block = 50\n", "for k in range(0,len(jdepochs),block):\n", " obj = Horizons(id=source, location='F51', epochs=jdepochs[k:k+block])\n", " tlist.append(obj.ephemerides(extra_precision=True))\n", "tab2 = vstack(tlist)\n", "fullname = tab2['targetname'][0]\n", "print(f\"Got positions from Horizons for {len(jdepochs)} epochs with {len(tab2)} rows for {fullname}\")\n", "assert np.allclose(jdepochs, tab2['datetime_jd'].data)\n", "# convert output epochs from UTC back to TAI for PS1 database matching\n", "t2 = Time(tab2['datetime_jd'].data,format='jd',scale='utc').tai.mjd\n", "assert np.allclose(t2,cadc_tab[\"MJD\"].data)\n", "tab2[:8]" ] }, { "cell_type": "markdown", "id": "78465a5f", "metadata": {}, "source": [ "## Add new positions and differences to cadc_tab\n", "\n", "Also add the `illumination` (illumination percentage) and `delta` (range to object in AU), which are useful to understand apparent brightness." 
] }, { "cell_type": "code", "execution_count": null, "id": "3a2eec70", "metadata": {}, "outputs": [], "source": [ "for c in ('ra','dec'):\n", " uc = c.upper()\n", " if c in cadc_tab.colnames:\n", " cadc_tab[c] = tab2[uc]\n", " else:\n", " cadc_tab.add_column(tab2[uc],name=c,index=cadc_tab.colnames.index('Object_RA'))\n", "\n", "cadc_tab['dra'] = (cadc_tab['ra']-cadc_tab['Object_RA'])*3600*np.cos(np.radians(cadc_tab['dec']))\n", "cadc_tab['ddec'] = (cadc_tab['dec']-cadc_tab['Object_Dec'])*3600\n", "for c in ('ra','dec','Object_RA','Object_Dec'):\n", " cadc_tab[c].format = \".6f\"\n", "for c in ('dra','ddec'):\n", " cadc_tab[c].format = \".2f\"\n", " cadc_tab[c].unit = \"arcsec\"\n", "\n", "cadc_tab['illumination'] = tab2['illumination']\n", "cadc_tab['illumination'].format = \".2f\"\n", "cadc_tab['delta'] = tab2['delta']\n", "cadc_tab['delta'].format = \".2f\"\n", "cadc_tab['delta'].unit = \"AU\"\n", "\n", "print(f\"dra range {cadc_tab['dra'].min():.2f} {cadc_tab['dra'].max():.2f}\"\n", " f\" MAD {np.median(np.abs(cadc_tab['dra'].data.filled())):.2f} arcsec\")\n", "print(f\"ddec range {cadc_tab['ddec'].min():.2f} {cadc_tab['ddec'].max():.2f}\"\n", " f\" MAD {np.median(np.abs(cadc_tab['ddec'].data.filled())):.2f} arcsec\")" ] }, { "cell_type": "markdown", "id": "5740e8b6", "metadata": {}, "source": [ "## Compute PS1 skycell position for each ra, dec value\n", "\n", "Add the `projcell` and `subcell` to the table.\n", "\n", "I could get this from the `ForcedWarpMeta` table in the database, but it is easier just to compute it from the RA and Dec values." ] }, { "cell_type": "code", "execution_count": null, "id": "9cf6d01d", "metadata": {}, "outputs": [], "source": [ "rv = findskycell(cadc_tab['ra'],cadc_tab['dec'])\n", "for c in rv.colnames:\n", " if c not in cadc_tab.colnames:\n", " cadc_tab[c] = rv[c]\n", "cadc_tab[\"crval1\"].format = \".6f\"\n", "for c in (\"x\",\"y\"):\n", " cadc_tab[c].format = \".2f\"\n", "cadc_tab[:8]" ] }, { "cell_type": "markdown", "id": "eeb9252d", "metadata": {}, "source": [ "## Remove duplicate warp image entries\n", "\n", "Note that several skycells can include a position. Those skycells, which come from the same single exposure, actually have the same data; they are not independent images. The `findskycell()` function just picks the best skycell (farthest from the edge). That is fine since each exposure that overlaps that skycell will generate a warp image.\n", "\n", "Find the duplicate entries in the table that are from overlapping skycells, and keep the one that matches the `projcell` and `subcell` values. This is a little tricky because the corrected positions can cause objects to move from one skycell to another. Handle that as well in getting a unique set of images." 
] }, { "cell_type": "code", "execution_count": null, "id": "5e82a94d-9c4a-452f-a195-7c1c6e648537", "metadata": {}, "outputs": [], "source": [ "umjd, umjd_counts = np.unique(cadc_tab['MJD'], return_counts=True)\n", "nuniq = len(umjd)\n", "print(f\"cadc_tab has {len(cadc_tab)-nuniq} duplicate entries, leaving {nuniq} unique entries\")\n", "keep = np.array([image.startswith(f\"rings.v3.skycell.{projcell:04d}.{subcell:03d}.\") \n", " for image, projcell, subcell in zip(cadc_tab['Image'].data,cadc_tab['projcell'].data,cadc_tab['subcell'].data)])\n", "\n", "if keep.sum() != nuniq:\n", " # find which MJD values will disappear\n", " jtest = join(cadc_tab['MJD','Image','projcell','subcell'], cadc_tab['MJD','Image_target'][keep], join_type='left')\n", " w = np.where(jtest['Image_target'].mask)[0]\n", " if len(w) > nuniq-keep.sum():\n", " # this happens when there were multiple skycells matched from CADC, but they are\n", " # replaced by a single best skycell that is not either of the CADC entries\n", " # e.g. I found a case where CADC returned skycells 1589.003 and 1589.004 (at the same MJD)\n", " # but the best skycell is 1589.013\n", " # just keep one of the entries for duplicates at the same MJD\n", " j = len(\"rings.v3.skycell.nnnn.nnn.\")\n", " print(f\"First iteration of keep has {len(keep)} entries, {len(w)=}\")\n", " cmulti = {}\n", " nchanged = 0\n", " for i in w:\n", " tt = jtest['MJD'][i]\n", " cmulti[tt] = cmulti.get(tt,0) + 1\n", " if cmulti[tt] == 2:\n", " # change filename for this row\n", " projcell = jtest['projcell'][i]\n", " subcell = jtest['subcell'][i]\n", " image = jtest['Image'][i]\n", " kk = np.where((cadc_tab['MJD'] == tt) & \n", " (cadc_tab['projcell'] == projcell) &\n", " (cadc_tab['subcell'] == subcell) &\n", " (cadc_tab['Image'] == image))[0]\n", " assert len(kk)==1, f\"Trouble for {jtest['Image']} MJD {tt} {len(kk)=}\"\n", " kk = kk[0]\n", " oldname = cadc_tab['Image'][kk]\n", " if oldname.startswith(f\"rings.v3.skycell.{projcell:04d}.\"):\n", " cadc_tab['Image'][kk] = f\"rings.v3.skycell.{projcell:04d}.{subcell:03d}.{oldname[j:]}\"\n", " print(f\"Changed {oldname} to {cadc_tab['Image'][kk]}\")\n", " nchanged += 1\n", " if nchanged == 0:\n", " print(\"Warning: no changes in attempt to fix mismatch\")\n", " else:\n", " # recreate the keep and jtest arrays\n", " keep = np.array([image.startswith(f\"rings.v3.skycell.{projcell:04d}.{subcell:03d}.\") \n", " for image, projcell, subcell in zip(cadc_tab['Image'].data,cadc_tab['projcell'].data,cadc_tab['subcell'].data)])\n", " jtest = join(cadc_tab['MJD','Image','projcell','subcell'], cadc_tab['MJD','Image_target'][keep], join_type='left')\n", " w = np.where(jtest['Image_target'].mask)[0]\n", " print(f\"Second iteration of keep has {len(keep)} entries, {len(w)=} (changed {nchanged} names)\")\n", "\n", " assert len(w) == nuniq-keep.sum(), f\"Mismatch {len(w)=} {nuniq=} {keep.sum()=} {nuniq-keep.sum()=}\"\n", " assert (cadc_tab['Image'][w]==jtest['Image'][w]).all()\n", " # keep the image name from CADC, and change the projcell/subcell columns to match\n", " new_projcell = [int(x.split('.')[0]) for x in cadc_tab['Image_target'][w].data]\n", " new_subcell = [int(x.split('.')[1]) for x in cadc_tab['Image_target'][w].data]\n", " print(f\"Patching projcell and subcell for {len(w)} images\")\n", " cadc_tab['projcell'][w] = new_projcell\n", " cadc_tab['subcell'][w] = new_subcell\n", " # recompute the keep array, which now should be complete\n", " keep = np.array([image.startswith(f\"rings.v3.skycell.{projcell:04d}.{subcell:03d}.\") \n", 
" for image, projcell, subcell in zip(cadc_tab['Image'].data,cadc_tab['projcell'].data,cadc_tab['subcell'].data)])\n", " assert keep.sum() == nuniq, f\"ERROR: unable to fix discrepancy, keep.sum={keep.sum()} nuniq={nuniq}\"\n", " print(\"Modified lines\")\n", " print(cadc_tab['Image','MJD','Filter','Image_target','projcell','subcell'][w])\n", "\n", "cadc_tab = cadc_tab[keep]\n", "cadc_tab[:8]" ] }, { "cell_type": "markdown", "id": "0f13dbc1", "metadata": {}, "source": [ "## Extract and plot the full path of object from JPL Horizons\n", "\n", "This is not needed and can be skipped to save time. This takes about 6 seconds. It does result in an interesting plot, however.\n", "\n", "Query JPL Horizons to get the source position at one day intervals. Undo the RA-wrapping and plot the path for the 5 years of the PS1 survey. Mark the positions where PS1 observations were made." ] }, { "cell_type": "code", "execution_count": null, "id": "ad42b7ce", "metadata": {}, "outputs": [], "source": [ "%%time \n", "\n", "# PS1 epoch range with epoch edges set to round days\n", "emin = Time(54985.0, format='mjd')\n", "emax = Time(57079.0, format='mjd')\n", "\n", "print(f\"Searching for {source} from {emin.isot} to {emax.isot} ({emax.mjd-emin.mjd:.0f} days)\")\n", "\n", "# query using 1-day time steps\n", "obj = Horizons(id=source, location='F51', \n", " epochs=dict(start=emin.isot, stop=emax.isot, step='1d'))#, \n", "\n", "jpltab = obj.ephemerides(extra_precision=True)\n", "fullname = jpltab['targetname'][0]\n", "print(f\"Got positions from Horizons with {len(jpltab)} rows for {fullname}\")\n", "\n", "# undo RA-wrapping from 0-360 degrees\n", "# add or subtract multiples of 360 to create a smooth path\n", "njumps = 0\n", "ratab = jpltab['RA'].data.copy()\n", "w = np.where(ratab[1:] > ratab[:-1]+180)[0]\n", "for k in w:\n", " njumps += 1\n", " ratab[k+1:] -= 360\n", "w = np.where(ratab[1:] < ratab[:-1]-180)[0]\n", "for k in w:\n", " njumps += 1\n", " ratab[k+1:] += 360\n", "while ratab.min() < 0:\n", " ratab += 360\n", "\n", "plt.rcParams.update({\"font.size\":14})\n", "plt.figure(1,(12,6))\n", "plt.plot(ratab, jpltab['DEC'])\n", "\n", "# plot every 30 days as a dot to see movement\n", "mstep = int(30/(jpltab['datetime_jd'][1]-jpltab['datetime_jd'][0]) + 0.5)\n", "plt.plot(ratab[::mstep], jpltab['DEC'][::mstep], 'o', color=\"tab:blue\", label=\"Monthly positions\")\n", "\n", "# show positions at the times of PS1 observations\n", "jpl_mjd = Time(jpltab['datetime_jd'],format='jd').mjd\n", "ra_interp = np.interp(cadc_tab['MJD'], jpl_mjd, ratab)\n", "dec_interp = np.interp(cadc_tab['MJD'], jpl_mjd, jpltab['DEC'])\n", "plt.plot(ra_interp, dec_interp, 'o', label=f\"{len(cadc_tab)} PS1 observations\") \n", "\n", "plt.xlabel(\"Unwrapped RA [deg]\")\n", "plt.ylabel(\"Dec [deg]\")\n", "plt.legend(loc=\"best\")\n", "plt.title(f\"{fullname} from {emin.isot[:10]} to {emax.isot[:10]}\")\n", "plt.tight_layout()\n", "if saveplots:\n", " plt.savefig(\"sky_path.png\", facecolor=\"white\")" ] }, { "cell_type": "markdown", "id": "85086068", "metadata": {}, "source": [ "## Extract JPEG cutout images to see whether the object is there\n", "\n", "Note some images are blank or have artifacts near the source positions. Approximately half of the positions have data at the center. That will be typical for these moving object queries.\n", "\n", "Blank images have a minimum value of 255 (all white), so they are easily excluded. \n", "\n", "There are ~100 cutout images of which ~60 are not completely blank. 
That will be true for any object, although the exact numbers will vary depending on the fraction of observations where the object winds up in a gap in the camera." ] }, { "cell_type": "code", "execution_count": null, "id": "43e77f68", "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "imsize = 65 # cutout size in 0.25 arcsec pixels\n", "output_size = 130\n", "\n", "images = getimages(cadc_tab, size=imsize, output_size=output_size, verbose=True)\n", "print(f\"Got {len(images)} images\")\n", "\n", "# omit blank images\n", "goodimages = []\n", "igood = []\n", "for i, image in enumerate(images):\n", " if image is not None and np.array(image).min() != 255:\n", " goodimages.append(image)\n", " igood.append(i)\n", "ngood = len(goodimages)\n", "igood = np.array(igood)\n", "goodtab = cadc_tab[igood]\n", "# add SrcID\n", "goodtab.add_column(np.arange(len(goodtab)), name=\"SrcID\", index=0)\n", "print(f\"{len(goodimages)} of {len(images)} are not blank\")" ] }, { "cell_type": "markdown", "id": "b900a185", "metadata": {}, "source": [ "## Show non-blank images in a grid" ] }, { "cell_type": "code", "execution_count": null, "id": "e1520b86", "metadata": {}, "outputs": [], "source": [ "ncols = 7\n", "nrows = (ngood+ncols-1)//ncols\n", "hsize = ncols * 2.5\n", "vsize = nrows * 2.5\n", "extent = np.array([1,-1,-1,1])*0.25*imsize/2\n", "plt.rcParams.update({\"font.size\":10})\n", "plt.figure(1,(hsize,vsize))\n", "for i in range(ngood):\n", " plt.subplot(nrows,ncols,i+1)\n", " plt.imshow(goodimages[i], origin=\"upper\", cmap=\"gray\", vmin=0, vmax=255, extent=extent)\n", " plt.title(f\"{goodtab['Filter'][i]} {goodtab['MJD'][i]:.5f}\")\n", "plt.tight_layout()\n", "plt.suptitle(f\"{ngood} {imsize}x{imsize} pixel ({imsize/4:.0f}x{imsize/4:.0f} arcsec) PS1 images for {fullname}\",\n", " y=1.01, fontsize=16, fontweight=\"bold\")\n", "if saveplots:\n", " plt.savefig(\"images.png\",facecolor=\"white\", bbox_inches=\"tight\")" ] }, { "cell_type": "markdown", "id": "a46497cd", "metadata": {}, "source": [ "## Test: Extract some images at the original CADC positions to compare with the JPL positions\n", "\n", "Select the first 14 images with shifts small enough to see on the image. Then make a plot comparing the test images and the good images.\n", "\n", "The first and third rows use the CADC position, and the second and fourth rows use the refined JPL Horizons position. The image center is marked. Clearly the updated JPL positions are much better when there is a shift.\n", "\n", "This test is turned off by default since it is not needed for the example. Set `testing = True` to run it." 
] }, { "cell_type": "code", "execution_count": null, "id": "a31d1a61", "metadata": {}, "outputs": [], "source": [ "testing = False\n", "if not testing:\n", " print(\"Not testing, skipped this step\")\n", "else:\n", " goodsep = np.sqrt(goodtab['dra'].data**2+goodtab['ddec'].data**2)\n", " itest = np.where(goodsep < 0.25*imsize/2)[0]\n", " itest = itest[:14]\n", "\n", " testimages = getimages(goodtab[itest], size=imsize, output_size=output_size, verbose=True,\n", " racol='Object_RA', deccol='Object_Dec')\n", " print(f\"Got {len(testimages)} test images\")\n", "\n", " ncols = 7\n", " nrows = (len(itest)*2+ncols-1)//ncols\n", " hsize = ncols * 2.5\n", " vsize = nrows * 2.5\n", " extent = np.array([-1,1,-1,1])*0.25*imsize/2\n", " plt.rcParams.update({\"font.size\":10})\n", " plt.figure(1,(hsize,vsize))\n", " for i in range(len(itest)):\n", " j = itest[i]\n", " plt.subplot(nrows,ncols,i+1+(i//ncols)*ncols)\n", " plt.imshow(testimages[i], origin=\"upper\", cmap=\"gray\", vmin=0, vmax=255, extent=extent)\n", " plt.plot(0.0, 0.0, 'o', color=\"tab:orange\", markerfacecolor=\"none\", markersize=15)\n", " plt.title(f\"{goodtab['SrcID'][j]}: CADC {goodtab['Filter'][j]} {goodtab['MJD'][j]:.5f}\", color=\"tab:red\")\n", " plt.subplot(nrows,ncols,i+1+(i//ncols+1)*ncols)\n", " plt.imshow(goodimages[j], origin=\"upper\", cmap=\"gray\", vmin=0, vmax=255, extent=extent)\n", " plt.plot(0.0, 0.0, 'o', color=\"tab:blue\", markerfacecolor=\"none\", markersize=15)\n", " plt.title(f\"{goodtab['SrcID'][j]}: JPL {goodtab['Filter'][j]} {goodtab['MJD'][j]:.5f}\", color=\"tab:blue\")\n", " plt.tight_layout()\n", " _ = plt.suptitle(f\"{ngood} {imsize}x{imsize} pixel ({imsize/4:.0f}x{imsize/4:.0f} arcsec) PS1 images for {fullname}\",\n", " y=1.01, fontsize=16, fontweight=\"bold\")" ] }, { "cell_type": "markdown", "id": "fb091190", "metadata": {}, "source": [ "## Upload the final match table and extract Detection table information\n", "\n", "This creates a table `MyDB.jplhorizons3` in CasJobs and then does a cross-match to the `ObjectThin` and `Detection` tables.\n", "\n", "The constraints are:\n", "* Search radius 2 arcsec\n", "* MJD match within half of exposure time (typically 15-20 sec)\n", "\n", "The matched objects that are found are never close to these limits.\n", "\n", "Note the `filterID` is not included in the constraints. It could be included, but that does not seem necessary given the tight constraint on MJD. There is only one exposure in a single filter at any given time. The next code blocks checks for any filter mismatches, but should not ever find any problems unless there are errors in the database.\n", "\n", "If you do not want to see the images, this crossmatch can work from the original `cadc_tab` rather than the good images list in `goodtab`. That will return the same number of matches." 
] }, { "cell_type": "code", "execution_count": null, "id": "c49e181b", "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "# convert table to a csv string\n", "cnames = [\"SrcID\", \"ra\", \"dec\", \"Filter\", \"MJD\"]\n", "with StringIO() as fh:\n", " goodtab[cnames].write(fh, format=\"ascii.csv\")\n", " hdata = fh.getvalue()\n", "\n", "table = \"jplhorizons3\"\n", "\n", "jobs = mastcasjobs.MastCasJobs(context=\"MyDB\", request_type=\"POST\")\n", "jobs.drop_table_if_exists(table)\n", "print(f\"Dropped MyDB.{table}\")\n", "\n", "jobs.upload_table(table, hdata, exists=False)\n", "print(f\"Uploaded table to MyDB.{table}\")\n", " \n", "query = f\"\"\"\n", "select t.SrcID, dbo.fDistanceArcminEq(d.ra,d.dec,t.ra,t.dec)*60 as darcsec,\n", " t.ra as hra, t.dec as hdec,\n", " t.filter as hfilter, t.mjd as hmjd,\n", " d.objID, d.detectID, d.filterID, d.imageID, d.obsTime, d.ra, d.dec,\n", " d.raErr, d.decErr, d.expTime, d.psfFlux, d.psfFluxErr, \n", " d.psfMajorFWHM, d.psfMinorFWHM, d.psfTheta, d.psfCore, \n", " d.psfQfPerfect, d.psfChiSq, d.apFlux, d.apFluxErr, \n", " d.infoFlag, d.infoFlag2, d.infoFlag3\n", "from MyDB.{table} t \n", "cross apply fGetNearbyObjEq(t.ra, t.dec, 2.0/60.0) nb\n", "join Detection d on nb.objid=d.objid and d.obsTime between t.mjd-0.5*d.expTime/86400 and t.mjd+0.5*d.expTime/86400\n", "order by d.obsTime, darcsec\n", "\"\"\"\n", "\n", "dtab = jobs.quick(query, context=\"PanSTARRS_DR2\")\n", "print(f\"Retrieved {len(dtab)} row table\")\n", "\n", "dtab['filterID'] = dtab['filterID'].astype(np.uint8)\n", "dtab['darcsec'].format = \".2f\"\n", "for c in ('hmjd','obsTime'):\n", " dtab[c].format = \".5f\"\n", "dtab" ] }, { "cell_type": "markdown", "id": "f6d2f372", "metadata": {}, "source": [ "## Check for mismatches in filter or time\n", "\n", "Also checks to see whether any `SrcID` entries have duplicate matches. That can happen if the object happens to be moving through a very crowded region such as the Galactic plane. If there are multiple matches, the closest one is retained. (You might want to look more carefully at such cases, however.)" ] }, { "cell_type": "code", "execution_count": null, "id": "17db93b2", "metadata": {}, "outputs": [], "source": [ "print((np.abs(dtab['hmjd']-dtab['obsTime']) > 0.00002).sum(), \"sources with MJD discrepancy\")\n", "print((dtab['filterID'] != [(\"grizy\".find(x)+1) for x in dtab['hfilter']]).sum(), \"sources with filter discrepancy\")\n", "ndup = len(dtab)-len(np.unique(dtab['SrcID']))\n", "print(f\"{ndup} duplicate matches to same SrcID\")\n", "if ndup > 0:\n", " # pick closer of each duplicate entry\n", " dtab.sort(['SrcID', 'darcsec'])\n", " h = np.bincount(dtab['SrcID'])\n", " w = np.where(h > 1)[0]\n", " keep = np.ones(len(dtab), dtype=bool)\n", " for s in w:\n", " ww = np.where(dtab['SrcID']==s)[0]\n", " keep[ww[1:]] = False\n", " dtab = dtab[keep]\n", " print(f\"Retaining {len(dtab)} entries with only closest for dups\")\n", " \n", "print(f\"Maximum epoch difference: {(dtab['hmjd']-dtab['obsTime']).max()*86400:.4f} s\")" ] }, { "cell_type": "markdown", "id": "9f4639df", "metadata": {}, "source": [ "## Scatterplot showing position differences between PS1 Detection and JPL Horizons\n", "\n", "The position differences will be a combination of PS1 position errors and uncertainties in the object's ephemeris. The `psfQfPerfect` flag is used to highlight likely poor quality detections that have bad pixels in the core of the PSF. They account for the points that have large position errors." 
] }, { "cell_type": "code", "execution_count": null, "id": "a13eafcd", "metadata": {}, "outputs": [], "source": [ "plt.rcParams.update({\"font.size\":14})\n", "plt.figure(1,(12,12))\n", "wgood = np.where(dtab['psfQfPerfect'] > 0.95)[0]\n", "wbad = np.where(dtab['psfQfPerfect'] <= 0.95)[0]\n", "dra = (dtab['ra']-dtab['hra'])*3600*np.cos(np.radians(dtab['hdec']))\n", "ddec = (dtab['dec']-dtab['hdec'])*3600\n", "plt.plot(dra[wgood], ddec[wgood], 'o', label=f\"{len(wgood)} good points\")\n", "plt.plot(dra[wbad], ddec[wbad], 'o', label=f\"{len(wbad)} bad points\")\n", "plt.xlabel(\"Delta RA [arcsec]\")\n", "plt.ylabel(\"Delta Dec [arcsec]\")\n", "plt.gca().set_aspect(1.0)\n", "plt.legend(loc=\"best\", title=\"psfQfPerfect flags\")\n", "plt.title(f\"PS1 catalog detections for {fullname}\")\n", "if saveplots:\n", " plt.savefig(\"position-diffs.png\", facecolor=\"white\", bbox_inches=\"tight\")" ] }, { "cell_type": "markdown", "id": "d4bc8052", "metadata": {}, "source": [ "## Show images with sources in catalog marked\n", "\n", "The shifts are usually too small to see, but this does show which of the images have PS1 detections and which do not.\n", "\n", "Magnitudes are shown in the plot titles for detected objects. A magnitude followed by `*` is dubious because it has `psfQfPerfect <= 0.95`, indicating that there are bad pixels in the core of the object. The marker uses a different color to highlight those cases.\n", "\n", "
\n", " Note: \n", " The plotted positions are not strictly accurate. The cutouts are centered on the nearest pixel from the PS1 image, which means they can be shifted by up to half a pixel (0.125 arcsec) from the requested coordinate system. It would be more accurate to use the WCS for the cutout (which is available for FITS cutouts or in the comment section of JPEG cutouts) to compute the marker position.\n", "
" ] }, { "cell_type": "code", "execution_count": null, "id": "0cab5f6d", "metadata": {}, "outputs": [], "source": [ "goodmatch = join(goodtab, dtab['SrcID','ra','dec','psfFlux','psfQfPerfect'], keys='SrcID', join_type='left')\n", "# offset of source from center\n", "dx = (goodmatch['ra_2']-goodmatch['ra_1'])*3600*np.cos(np.radians(goodmatch['dec_1']))\n", "dy = (goodmatch['dec_2']-goodmatch['dec_1'])*3600\n", "\n", "# convert the fluxes to AB magnitudes\n", "mag = -2.5*np.log10(goodmatch['psfFlux']) + 8.90\n", "goodmatch['mag'] = mag\n", "\n", "ncols = 7\n", "nrows = (ngood+ncols-1)//ncols\n", "hsize = ncols * 2.5\n", "vsize = nrows * 2.5\n", "extent = np.array([1,-1,-1,1])*0.25*imsize/2\n", "\n", "plt.rcParams.update({\"font.size\":10})\n", "plt.figure(1,(hsize,vsize))\n", "for i in range(ngood):\n", " plt.subplot(nrows,ncols,i+1)\n", " plt.imshow(goodimages[i], origin=\"upper\", cmap=\"gray\", vmin=0, vmax=255, extent=extent)\n", " if hasattr(goodmatch['ra_2'],'mask') and goodmatch['ra_2'].mask[i]:\n", " # no detection\n", " plt.title(f\"{goodmatch['Filter'][i]} {goodmatch['MJD'][i]:.5f}\")\n", " else:\n", " # show detection\n", " if goodmatch['psfQfPerfect'][i]<=0.95:\n", " # dubious magnitude\n", " warning = '*'\n", " color = \"tab:orange\"\n", " tcolor = color\n", " else:\n", " warning = ''\n", " color = \"tab:cyan\"\n", " tcolor = \"tab:blue\"\n", " plt.plot(dx[i], dy[i], 'o', color=color, markerfacecolor=\"none\", markersize=15, markeredgewidth=2)\n", " plt.title(f\"{goodmatch['Filter'][i]}={mag[i]:.2f}{warning} {goodmatch['MJD'][i]:.5f}\",\n", " color=tcolor)\n", "plt.tight_layout()\n", "# print(f\"{ngood} {imsize}x{imsize} pixel ({imsize/4:.0f}x{imsize/4:.0f} arcsec) PS1 images for {fullname}\")\n", "plt.suptitle(f\"{ngood} {imsize}x{imsize} pixel ({imsize/4:.0f}x{imsize/4:.0f} arcsec) PS1 images for {fullname}\",\n", " y=1.01, fontsize=16, fontweight=\"bold\")\n", "if saveplots:\n", " plt.savefig(\"images-catalogs.png\", facecolor=\"white\", bbox_inches=\"tight\")" ] }, { "cell_type": "markdown", "id": "d42c1b0e", "metadata": {}, "source": [ "## Plot light curve\n", "\n", "This removes poor quality measurements using the `psfQfPerfect > 0.95` test. It also shows a plot with the magnitudes corrected using a simple model determined by the illumination fraction and distance to the object.\n", "\n", "Note that the model does account for most of the variation in apparent brightness." 
] }, { "cell_type": "code", "execution_count": null, "id": "8f253764", "metadata": {}, "outputs": [], "source": [ "jdtab = join(dtab, goodtab['SrcID','illumination','delta'])\n", "assert len(jdtab) == len(dtab)\n", "jdtab['mag'] = -2.5*np.log10(jdtab['psfFlux']) + 8.90\n", "\n", "# illumination magnitude factor\n", "jdtab['illumfac'] = -2.5*np.log10(jdtab['illumination']/jdtab['delta']**2)\n", "jdtab['illumfac'] -= jdtab['illumfac'].mean()\n", "\n", "fdtab = jdtab.group_by('filterID')\n", "\n", "plt.rcParams.update({\"font.size\":14})\n", "plt.figure(1,(10,10))\n", "sub1 = plt.subplot(2,1,1)\n", "for g in fdtab.groups:\n", " hfilter = g['hfilter'][0]\n", " ww = np.where(g['psfQfPerfect']>0.95)[0]\n", " lines = plt.plot(g['hmjd'][ww], g['mag'][ww], 'o-',label=f\"{hfilter} ({len(ww)} pts)\")\n", "# plt.xlabel(\"MJD [days]\")\n", "plt.ylabel(\"Brightness [mag]\")\n", "ylim = plt.ylim()[::-1]\n", "plt.legend(loc=\"upper left\", fontsize=12)\n", "plt.title(f\"PS1 light curve for {fullname}\")\n", "\n", "sub2 = plt.subplot(2,1,2)\n", "for g in fdtab.groups:\n", " hfilter = g['hfilter'][0]\n", " ww = np.where(g['psfQfPerfect']>0.95)[0]\n", " plt.plot(g['hmjd'][ww], g['mag'][ww]-g['illumfac'][ww], 'o-', label=f\"{hfilter} ({len(ww)} pts)\")\n", "plt.xlabel(\"MJD [days]\")\n", "ylim2 = plt.ylim()[::-1]\n", "ylim = (max(ylim[0],ylim2[0]), min(ylim[1],ylim2[1]))\n", "sub2.set_ylim(ylim)\n", "sub1.set_ylim(ylim)\n", "plt.ylabel(\"Brightness - Prediction [mag]\")\n", "plt.legend(loc=\"upper left\", fontsize=12)\n", "plt.title(\"Light curve with distance and illumination effects removed\")\n", "plt.tight_layout()\n", "if saveplots:\n", " plt.savefig(\"light-curves.png\", facecolor=\"white\")" ] }, { "cell_type": "code", "execution_count": null, "id": "958a126a", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.8" } }, "nbformat": 4, "nbformat_minor": 5 }