Reference#
A friendly wrapper that loads time-series spectra and/or
multiwavelength light curves into a chromatic
Rainbow
object. It will do its best to pick the appropriate reader
and return the most useful kind of object.
Parameters#
filepath : str, list
The file or files to open.
**kw : dict
All other keyword arguments will be passed to
the Rainbow
initialization.
Returns#
rainbow : Rainbow, RainbowWithModel
The loaded data!
Source code in chromatic/rainbows/__init__.py
Rainbow
(🌈) objects represent brightness as a function
of both wavelength and time.
These objects are useful for reading or writing multiwavelength
time-series datasets in a variety of formats, visualizing these
data with simple commands, and performing basic calculations.
RainbowWithModel
and SimulatedRainbow
objects inherit from
Rainbow, so basically all methods and attributes described
below are available for them too.
Attributes#
wavelike : dict
A dictionary for quantities with shape (nwave,),
for which there's one value for each wavelength.
timelike : dict
A dictionary for quantities with shape
(ntime,),
for which there's one value for each time.
fluxlike : dict
A dictionary for quantities with shape (nwave,ntime),
for which there's one value for each wavelength and time.
metadata : dict
A dictionary containing all other useful information
that should stay connected to the
Rainbow, in any format.
wavelength : Quantity
The 1D array of wavelengths for this
Rainbow.
(This is a property, not an actual attribute.)
time : Quantity
The 1D array of times for this
Rainbow.
(This is a property, not an actual attribute.)
flux : array, Quantity
The 2D array of fluxes for this
Rainbow.
(This is a property, not an actual attribute.)
uncertainty : array, Quantity
The 2D array of flux uncertainties for this
Rainbow.
(This is a property, not an actual attribute.)
ok : array
The 2D array of "ok-ness" for this
Rainbow.
(This is a property, not an actual attribute.)
shape : tuple
The shape of this
Rainbow's flux array.
(This is a property, not an actual attribute.)
nwave : int
The number of wavelengths in this
Rainbow.
(This is a property, not an actual attribute.)
ntime : int
The number of times in this
Rainbow.
(This is a property, not an actual attribute.)
nflux : int
The total number of fluxes in this
Rainbow (= nwave*ntime).
(This is a property, not an actual attribute.)
dt : Quantity
The typical time offset between adjacent times in this
Rainbow.
(This is a property, not an actual attribute.)
name : str
The name of this
Rainbow, if one has been set.
(This is a property, not an actual attribute.)
Source code in chromatic/rainbows/rainbow.py
6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 
530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 
1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 |
|
dt property#
The typical timestep.
flux property#
The 2D array of fluxes (row = wavelength, col = time).
name property#
The name of this Rainbow object.
nflux property#
The total number of fluxes.
ntime property#
The number of times.
nwave property#
The number of wavelengths.
ok property#
The 2D array of whether data is OK (row = wavelength, col = time).
shape property#
The shape of the flux array (nwave, ntime).
time property#
The 1D array of time (with astropy units of time).
uncertainty property#
The 2D array of uncertainties on the fluxes.
wavelength property#
The 1D array of wavelengths (with astropy units of length).
__getattr__(key)#
If an attribute/method isn't explicitly defined, try to pull it from one of the core dictionaries.
Let's say you want to get the 2D uncertainty array but don't want to type self.fluxlike['uncertainty']. You could instead type self.uncertainty, and this would try to search through the four standard dictionaries to pull out the first uncertainty it finds.
Parameters#
key : str
The attribute we're trying to get.
Source code in chromatic/rainbows/rainbow.py
__getitem__(key)#
Trim a rainbow by indexing, slicing, or masking.
Two indices must be provided ([:,:]).
Examples#
r[:,:]
r[10:20, :]
r[np.arange(10,20), :]
r[r.wavelength > 1*u.micron, :]
r[:, np.abs(r.time) < 1*u.hour]
r[r.wavelength > 1*u.micron, np.abs(r.time) < 1*u.hour]
Parameters#
key : tuple
The (wavelength, time) slices, indices, or masks.
Source code in chromatic/rainbows/rainbow.py
__init__(filepath=None, format=None, wavelength=None, time=None, flux=None, uncertainty=None, wavelike=None, timelike=None, fluxlike=None, metadata=None, name=None, **kw)#
Initialize a Rainbow object.
The __init__ function is called when a new Rainbow is instantiated as r = Rainbow(some, kinds, of=inputs).
The options for inputs are flexible, including the possibility to initialize from a file, from arrays with appropriate units, from dictionaries with appropriate ingredients, or simply as an empty object if no arguments are given.
Parameters#
filepath : str, optional
The filepath pointing to the file or group of files
that should be read.
format : str, optional
The file format of the file to be read. If None,
the format will be guessed automatically from the
filepath.
wavelength : Quantity, optional
A 1D array of wavelengths, in any unit.
time : Quantity, Time, optional
A 1D array of times, in any unit.
flux : array, optional
A 2D array of flux values.
uncertainty : array, optional
A 2D array of uncertainties, associated with the flux.
wavelike : dict, optional
A dictionary containing 1D arrays with the same
shape as the wavelength axis. It must at least
contain the key 'wavelength', which should have
astropy units of wavelength associated with it.
timelike : dict, optional
A dictionary containing 1D arrays with the same
shape as the time axis. It must at least
contain the key 'time', which should have
astropy units of time associated with it.
fluxlike : dict, optional
A dictionary containing 2D arrays with the shape
of (nwave, ntime), like flux. It must at least
contain the key 'flux'.
metadata : dict, optional
A dictionary containing all other metadata
associated with the dataset, generally lots of
individual parameters or comments.
**kw : dict, optional
Additional keywords will be passed along to
the function that initializes the rainbow.
If initializing from arrays (time=, wavelength=, ...),
these keywords will be interpreted as additional arrays
that should be sorted by their shape into the appropriate
dictionary. If initializing from files, the keywords will
be passed on to the reader.
Examples#
Initialize from a file. While this works, a more robust
solution is probably to use read_rainbow, which will
automatically choose the best of Rainbow and RainbowWithModel.
r1 = Rainbow('my-neat-file.abc', format='abcdefgh')
Initialize from arrays. The wavelength and time must have
appropriate units, and the shape of the flux array must
match the size of the wavelength and time arrays. Other
arrays that match the shape of any of these quantities
will be stored in the appropriate location. Other inputs
not matching any of these will be stored as metadata.
r2 = Rainbow(
wavelength=np.linspace(1, 5, 50)*u.micron,
time=np.linspace(-0.5, 0.5, 100)*u.day,
flux=np.random.normal(0, 1, (50, 100)),
some_other_array=np.ones((50,100)),
some_metadata='wow!'
)
Initialize from dictionaries. The dictionaries must contain
at least wavelike['wavelength'], timelike['time'], and
fluxlike['flux'], but any other additional inputs can be
provided.
r3 = Rainbow(
wavelike=dict(wavelength=np.linspace(1, 5, 50)*u.micron),
timelike=dict(time=np.linspace(-0.5, 0.5, 100)*u.day),
fluxlike=dict(flux=np.random.normal(0, 1, (50, 100)))
)
Source code in chromatic/rainbows/rainbow.py
__repr__()#
How should this object be represented as a string?
Source code in chromatic/rainbows/rainbow.py
__setattr__(key, value)#
When setting a new attribute, try to sort it into the appropriate core dictionary based on its size.
Let's say you have some quantity that has the same shape as the wavelength array and you'd like to attach it to this Rainbow object. This will try to save it in the most relevant core dictionary (of the choices timelike, wavelike, fluxlike).
Parameters#
key : str
The attribute we're trying to set.
value : array
The quantity we're trying to attach to that name.
Source code in chromatic/rainbows/rainbow.py
Bases: Rainbow
RainbowWithModel
objects have a fluxlike model
attached to them, meaning that they can be used for
calculations and visualizations that compare data to model
(for example, residuals or chi-squared).
This class definition inherits from Rainbow.
Source code in chromatic/rainbows/withmodel.py
chi_squared property#
Calculate $\chi^2$.
This calculates the sum of the squares of the uncertainty-normalized residuals, sum(((flux - model)/uncertainty)**2).
Data points marked as not OK are ignored.
Returns#
chi_squared : float
The chi-squared value.
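The formula above can be sketched directly in numpy (the arrays here are illustrative stand-ins for the fluxlike data, model, and uncertainty of a RainbowWithModel, with not-OK points excluded from the sum):

```python
import numpy as np

# illustrative (nwave, ntime) stand-ins for fluxlike arrays
rng = np.random.default_rng(0)
flux = 1 + 0.01 * rng.standard_normal((5, 8))
model = np.ones((5, 8))
uncertainty = 0.01 * np.ones((5, 8))
ok = np.ones((5, 8), dtype=bool)
ok[0, 0] = False  # pretend one data point is bad

# sum of squared, uncertainty-normalized residuals over OK points only
z = (flux - model) / uncertainty
chi_squared = np.sum(z[ok] ** 2)
print(chi_squared)
```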
ones property#
Generate an array of ones that looks like the flux.
(A tiny wrapper needed for plot_with_model.)
Returns#
ones : array, Quantity
The 2D array of ones, with shape (nwave, ntime).
Bases: RainbowWithModel
SimulatedRainbow
objects are created from scratch
within chromatic
, with options for various different
wavelength grids, time grids, noise sources, and injected
models. They can be useful for generating quick simulated
datasets for testing analysis and visualization tools.
This class definition inherits from RainbowWithModel,
which itself inherits from Rainbow.
Source code in chromatic/rainbows/simulated.py
__init__(tlim=[-2.5, 2.5] * u.hour, dt=2 * u.minute, time=None, wlim=[0.5, 5] * u.micron, R=100, dw=None, wavelength=None, star_flux=None, name=None, signal_to_noise=None)#
Initialize a SimulatedRainbow object from some parameters.
This sets up an effectively empty Rainbow with defined
wavelengths and times. For making more interesting
simulated datasets, this will often be paired with
some combination of the .inject... actions that inject
various astrophysical, instrumental, or noise signatures
into the dataset.
The time-setting order of precedence is:
1) time
2) tlim + dt
The wavelength-setting order of precedence is:
1) wavelength
2) wlim + dw
3) wlim + R
Parameters#
tlim : list or Quantity
The [min, max] times for creating the time grid.
These should have astropy units of time.
dt : Quantity
The d(time) bin size for creating a grid that is uniform in linear space.
time : Quantity
An array of times, if you just want to give it an entirely custom array.
wlim : list or Quantity
The [min, max] wavelengths for creating the grid.
These should have astropy units of wavelength.
R : float
The spectral resolution for creating a grid that is uniform in logarithmic space.
dw : Quantity
The d(wavelength) bin size for creating a grid that is uniform in linear space.
wavelength : Quantity
An array of wavelengths, if you just want to give it an entirely custom array.
star_flux : numpy 1D array
An array of fluxes corresponding to the supplied wavelengths. If left blank,
the code assumes a normalized flux of flux(wavelength) = 1 for all wavelengths.
Source code in chromatic/rainbows/simulated.py
🌈 Helpers#
Retrieve an attribute by its string name.
(This is a friendlier wrapper for getattr().)
r.get('flux') is identical to r.flux.
This is different from indexing directly into
a core dictionary (for example, r.fluxlike['flux']),
because it can also be used to get the results of
properties that do calculations on the fly (for example,
r.residuals in the RainbowWithModel class).
Parameters#
key : str
The name of the attribute, property, or core dictionary item to get.
default : any, optional
What to return if the attribute can't be found.
Returns#
thing : any
The thing you were trying to get. If unavailable,
return the default (which by default is None).
Source code in chromatic/rainbows/helpers/get.py
Print a quick reference of key actions available for this Rainbow.
Source code in chromatic/rainbows/helpers/help.py
Return a summary of the history of actions that have gone into this Rainbow.
Returns#
history : str
A string that does its best to summarize
all the actions that have been applied to this
Rainbow object from the moment it was created.
In some (but not all) cases, it may be possible
to copy, paste, and rerun this code to recreate
the Rainbow.
Source code in chromatic/rainbows/helpers/history.py
Save this Rainbow out to a file.
Parameters#
filepath : str
The filepath pointing to the file to be written.
(For now, it needs a .rainbow.npy extension.)
format : str, optional
The file format of the file to be written. If None,
the format will be guessed automatically from the
filepath.
**kw : dict, optional
All other keywords will be passed to the writer.
Source code in chromatic/rainbows/helpers/save.py
🌈 Actions#
Use 2D wavelength information to align onto a single 1D wavelength array.
This relies on the existence of a .fluxlike['wavelength_2d'] array,
expressing the wavelength associated with each flux element.
Those wavelengths will be used to (a) establish a new compromise
wavelength grid and (b) bin the individual timepoints onto that
new grid, effectively shifting the wavelengths to align.
Parameters#
minimum_acceptable_ok : float, optional
The numbers in the .ok attribute express "how OK?" each
data point is, ranging from 0 (not OK) to 1 (super OK).
In most cases, .ok will be binary, but there may be times
where it's intermediate (for example, if a bin was created
from some data that were not OK and some that were).
The minimum_acceptable_ok parameter allows you to specify what
level of OK-ness for a point to go into the binning.
Reasonable options may include:
minimum_acceptable_ok = 1
Only data points that are perfectly OK
will go into the binning. All other points
will effectively be interpolated over. Flux
uncertainties should be inflated appropriately,
but it's very possible to create correlated
bins next to each other if many of your ingoing
data points are not perfectly OK.
minimum_acceptable_ok = 1e-10
All data points that aren't definitely not OK
will go into the binning. The OK-ness of points
will propagate onward for future binning.
minimum_acceptable_ok = 0
All data points will be included in the bin.
The OK-ness will propagate onward.
wscale : str, optional
What kind of a new wavelength axis should be created?
Options include:
'linear' = constant d[wavelength] between grid points
'log' = constant d[wavelength]/[wavelength] between grid points
'nonlinear' = the median wavelength grid for all time points
supersampling : float, optional
By how many times should we increase or decrease the wavelength sampling?
In general, values >1 will split each input wavelength grid point into
multiple supersampled wavelength grid points, values close to 1 will
produce approximately one output wavelength for each input wavelength,
and values <1 will average multiple input wavelengths into a single output
wavelength bin.
Unless this is significantly less than 1, there's a good chance your output
array may have strong correlations between one or more adjacent wavelengths.
Be careful when trying to use the resulting uncertainties!
visualize : bool
Should we make some plots showing how the shared wavelength
axis compares to the original input wavelength axes?
Returns#
rainbow : Rainbow
A new Rainbow object, with the wavelengths aligned
onto a shared 1D grid.
Source code in chromatic/rainbows/actions/align_wavelengths.py
Attach a fluxlike model, thus making a new RainbowWithModel.
Having a model attached makes it possible to make calculations (residuals, chi^2) and visualizations comparing data to model.
The model array will be stored in .fluxlike['model'].
After running this to make a RainbowWithModel, it's OK
(and faster) to simply update .fluxlike['model'] or .model.
Parameters#
model : array, Quantity
An array of model values, with the same shape as 'flux'.
**kw : dict, optional
All other keywords will be interpreted as items
that can be added to a Rainbow. You might use this
to attach intermediate model steps or quantities.
Variable names ending with _model can be particularly
easily incorporated into multi-part model visualizations
(for example, 'planet_model' or 'systematics_model').
Returns#
rainbow : RainbowWithModel
A new RainbowWithModel
object, with the model attached.
Source code in chromatic/rainbows/actions/attach_model.py
bin(self, dt=None, time=None, time_edges=None, ntimes=None, R=None, dw=None, wavelength=None, wavelength_edges=None, nwavelengths=None, minimum_acceptable_ok=1, minimum_points_per_bin=None, trim=True)#
Bin in wavelength and/or time.
Average together some number of adjacent data points, in wavelength and/or time. For well-behaved data where data points are independent from each other, binning down by N data points should decrease the noise per bin by approximately 1/sqrt(N), making it easier to see subtle signals. To bin data points together, data are combined using inverse-variance weighting through interpolation of cumulative distributions, in an attempt to make sure that flux integrals between limits are maintained.
Currently, the inverse-variance weighting is most reliable only for datasets that have been normalized to be close to 1. We still need to do a little work to make sure it works well on unnormalized datasets with dramatically non-uniform uncertainties.
By default, time binning happens before wavelength binning.
To control the order, use separate calls to .bin().
The time-setting order of precedence is
[time_edges, time, dt, ntimes].
The first will be used, and others will be ignored.
The wavelength-setting order of precedence is
[wavelength_edges, wavelength, dw, R, nwavelengths].
The first will be used, and others will be ignored.
Parameters#
dt : Quantity
The d(time) bin size for creating a grid
that is uniform in linear space.
time : Quantity
An array of times, if you just want to give
it an entirely custom array.
The widths of the bins will be guessed from the centers
(well, if the spacing is uniform; pretty well
but not perfectly otherwise).
time_edges : Quantity
An array of times for the edges of bins,
if you just want to give an entirely custom array.
The bins will span time_edges[:-1] to time_edges[1:],
so the resulting binned Rainbow will have
len(time_edges) - 1 time bins associated with it.
ntimes : int
A fixed number of times to bin together.
Binning will start from the 0th element of the
starting times; if you want to start from
a different index, trim before binning.
R : float
The spectral resolution for creating a grid
that is uniform in logarithmic space.
dw : Quantity
The d(wavelength) bin size for creating a grid
that is uniform in linear space.
wavelength : Quantity
An array of wavelengths for the centers of bins,
if you just want to give an entirely custom array.
The widths of the bins will be guessed from the centers
(well, if the spacing is uniform; pretty well
but not perfectly otherwise).
wavelength_edges : Quantity
An array of wavelengths for the edges of bins,
if you just want to give an entirely custom array.
The bins will span wavelength_edges[:-1] to wavelength_edges[1:],
so the resulting binned Rainbow will have
len(wavelength_edges) - 1 wavelength bins associated with it.
nwavelengths : int
A fixed number of wavelengths to bin together.
Binning will start from the 0th element of the
starting wavelengths; if you want to start from
a different index, trim before binning.
minimum_acceptable_ok : float
The numbers in the .ok attribute express "how OK?" each
data point is, ranging from 0 (not OK) to 1 (super OK).
In most cases, .ok will be binary, but there may be times
where it's intermediate (for example, if a bin was created
from some data that were not OK and some that were).
The minimum_acceptable_ok parameter allows you to specify what
level of OK-ness for a point to go into the binning.
Reasonable options may include:
minimum_acceptable_ok = 1
Only data points that are perfectly OK
will go into the binning.
minimum_acceptable_ok = 1e-10
All data points that aren't definitely not OK
will go into the binning.
minimum_acceptable_ok = 0
All data points will be included in the bin.
minimum_points_per_bin : float
If you're creating bins that are smaller than those in
the original dataset, it's possible to end up with bins
that effectively contain fewer than one original datapoint
(in the sense that the contribution of one original datapoint
might be split across multiple new bins). By default,
we allow this behavior with minimum_points_per_bin=0, but you can
limit your result to only bins that contain one or more
original datapoints with minimum_points_per_bin=1.
trim : bool
Should any wavelengths or times that end up
as entirely nan be trimmed out of the result?
(default = True)
Returns#
binned : Rainbow
The binned Rainbow.
Source code in chromatic/rainbows/actions/binning.py
bin_in_time(self, dt=None, time=None, time_edges=None, ntimes=None, minimum_acceptable_ok=1, minimum_points_per_bin=None, trim=True)#
Bin in time.
The time-setting order of precedence is
[time_edges, time, dt, ntimes].
The first will be used, and others will be ignored.
Parameters#
dt : Quantity
The d(time) bin size for creating a grid
that is uniform in linear space.
time : Quantity
An array of times, if you just want to give
it an entirely custom array.
The widths of the bins will be guessed from the centers
(well, if the spacing is uniform; pretty well
but not perfectly otherwise).
time_edges : Quantity
An array of times for the edges of bins,
if you just want to give an entirely custom array.
The bins will span time_edges[:-1] to time_edges[1:],
so the resulting binned Rainbow will have
len(time_edges) - 1 time bins associated with it.
ntimes : int
A fixed number of times to bin together.
Binning will start from the 0th element of the
starting times; if you want to start from
a different index, trim before binning.
minimum_acceptable_ok : float
The numbers in the .ok attribute express "how OK?" each
data point is, ranging from 0 (not OK) to 1 (super OK).
In most cases, .ok will be binary, but there may be times
where it's intermediate (for example, if a bin was created
from some data that were not OK and some that were).
The minimum_acceptable_ok parameter allows you to specify what
level of OK-ness for a point to go into the binning.
Reasonable options may include:
minimum_acceptable_ok = 1
Only data points that are perfectly OK
will go into the binning.
minimum_acceptable_ok = 1e-10
All data points that aren't definitely not OK
will go into the binning.
minimum_acceptable_ok = 0
All data points will be included in the bin.
minimum_points_per_bin : float
If you're creating bins that are smaller than those in
the original dataset, it's possible to end up with bins
that effectively contain fewer than one original datapoint
(in the sense that the contribution of one original datapoint
might be split across multiple new bins). By default,
we allow this behavior with minimum_points_per_bin=0, but you can
limit your result to only bins that contain one or more
original datapoints with minimum_points_per_bin=1.
trim : bool
Should any wavelengths or times that end up
as entirely nan be trimmed out of the result?
(default = True)
Returns#
binned : Rainbow
The binned Rainbow.
Source code in chromatic/rainbows/actions/binning.py
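To make the minimum_acceptable_ok behavior concrete, here is a minimal pure-Python sketch (the helper name and data are invented for illustration; this is not chromatic's actual implementation):

```python
# Sketch: how `minimum_acceptable_ok` might gate which points enter a bin.
# This is an illustration, not chromatic's actual implementation.

def bin_with_ok_threshold(flux, ok, minimum_acceptable_ok=1):
    """Average only the points whose `ok` meets the threshold."""
    kept = [f for f, o in zip(flux, ok) if o >= minimum_acceptable_ok]
    if not kept:
        return float("nan")  # an entirely-bad bin becomes nan
    return sum(kept) / len(kept)

flux = [1.0, 1.2, 5.0, 0.9]
ok = [1.0, 1.0, 0.0, 0.5]   # the 5.0 outlier is marked not OK

strict = bin_with_ok_threshold(flux, ok, minimum_acceptable_ok=1)   # keeps 1.0, 1.2
lenient = bin_with_ok_threshold(flux, ok, minimum_acceptable_ok=0)  # keeps everything
```

With the strict threshold the flagged outlier is excluded; with minimum_acceptable_ok=0 it drags the bin average upward.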
bin_in_wavelength(self, R=None, dw=None, wavelength=None, wavelength_edges=None, nwavelengths=None, minimum_acceptable_ok=1, minimum_points_per_bin=None, trim=True, starting_wavelengths='1D')
#
Bin in wavelength.
The wavelength-setting order of precedence is [wavelength_edges, wavelength, dw, R, nwavelengths]. The first one provided will be used, and the others will be ignored.
Parameters#
R : float
The spectral resolution for creating a grid
that is uniform in logarithmic space.
dw : Quantity
The d(wavelength) bin size for creating a grid
that is uniform in linear space.
wavelength : Quantity
An array of wavelength centers, if you just want to give
it an entirely custom array. The widths of the bins
will be guessed from the centers. It will do a good
job if the widths are constant, but don't 100% trust
it otherwise.
wavelength_edges : Quantity
An array of wavelengths for the edges of bins,
if you just want to give an entirely custom array.
The bins will span wavelength_edges[:-1] to wavelength_edges[1:], so the resulting binned Rainbow will have len(wavelength_edges) - 1 wavelength bins associated with it.
nwavelengths : int
A fixed number of wavelengths to bin together.
Binning will start from the 0th element of the
starting wavelengths; if you want to start from
a different index, trim before binning.
minimum_acceptable_ok : float
The numbers in the .ok
attribute express "how OK?" each
data point is, ranging from 0 (not OK) to 1 (super OK).
In most cases, .ok
will be binary, but there may be times
where it's intermediate (for example, if a bin was created
from some data that were not OK and some that were).
The minimum_acceptable_ok parameter lets you specify the level of OK-ness required for a point to go into the binning. Reasonable options may include:
minimum_acceptable_ok = 1
Only data points that are perfectly OK
will go into the binning.
minimum_acceptable_ok = 1e-10
All data points that aren't definitely not OK
will go into the binning.
minimum_acceptable_ok = 0
All data points will be included in the bin.
minimum_points_per_bin : float
If you're creating bins that are smaller than those in
the original dataset, it's possible to end up with bins
that effectively contain fewer than one original datapoint
(in the sense that the contribution of one original datapoint
might be split across multiple new bins). By default,
we allow this behavior with minimum_points_per_bin=0, but you can limit your result to only bins that contain one or more original datapoints with minimum_points_per_bin=1.
trim : bool
Should any wavelengths or times that end up entirely nan be trimmed out of the result? (default = True)
starting_wavelengths : str
What wavelengths should be used as the starting
value from which we will be binning? Options include:
'1D' = (default) the shared 1D wavelengths for all times, stored in .wavelike['wavelength']
'2D' = (used only by align_wavelengths) the per-time 2D array stored in .fluxlike['wavelength']
[Most users probably don't need to change this from the default.]
Returns#
binned : Rainbow
The binned Rainbow.
Source code in chromatic/rainbows/actions/binning.py
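The R option above builds a grid that is uniform in logarithmic wavelength, so that neighboring bins differ by a constant fractional step of about 1/R. A sketch of what such a grid looks like (the helper name is invented for illustration; this is not chromatic's actual grid-building code):

```python
import math

# Sketch: a wavelength grid uniform in log space at spectral resolution R,
# so that dw/w is approximately 1/R. Illustrative only.

def logarithmic_wavelength_grid(w_min, w_max, R):
    """Return wavelengths spaced by a constant fractional step of 1/R."""
    n = int(math.ceil(math.log(w_max / w_min) * R))
    return [w_min * math.exp(i / R) for i in range(n + 1)]

grid = logarithmic_wavelength_grid(0.5, 5.0, R=10)
# neighboring wavelengths always differ by the same fractional step
ratios = [b / a for a, b in zip(grid[:-1], grid[1:])]
```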
get_average_lightcurve_as_rainbow(self)
#
Produce a wavelength-integrated light curve.
The average across wavelengths is uncertainty-weighted.
This uses bin, which is a horribly slow way of doing what is fundamentally a very simple array calculation, because here we don't need to deal with partial pixels.
Returns#
lc : Rainbow
A Rainbow object with just one wavelength.
Source code in chromatic/rainbows/actions/binning.py
get_average_spectrum_as_rainbow(self)
#
Produce a time-integrated spectrum.
The average across times is uncertainty-weighted.
This uses bin, which is a horribly slow way of doing what is fundamentally a very simple array calculation, because here we don't need to deal with partial pixels.
Returns#
spectrum : Rainbow
A Rainbow object with just one time.
Source code in chromatic/rainbows/actions/binning.py
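Both of these averages are uncertainty-weighted; the core arithmetic is the familiar inverse-variance weighted mean, sketched here with invented toy numbers (not chromatic's actual code path):

```python
# Sketch: the uncertainty-weighted average used conceptually when
# collapsing one axis: weight each point by 1/uncertainty**2.

def inverse_variance_weighted_mean(values, uncertainties):
    weights = [1.0 / s**2 for s in uncertainties]
    total = sum(w * v for w, v in zip(weights, values))
    return total / sum(weights)

fluxes = [1.00, 1.10]
sigmas = [0.01, 0.10]  # the second point is 10x noisier
avg = inverse_variance_weighted_mean(fluxes, sigmas)
# avg lands much closer to the precise measurement (1.00) than to 1.10
```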
Compare this Rainbow to others.
(still in development) This connects the current Rainbow to a collection of other Rainbow objects, which can then be visualized side-by-side in a uniform way.
Parameters#
rainbows : list
A list containing one or more other Rainbow objects. If you only want to compare with one other Rainbow, supply it in a 1-element list like .compare([other]).
Returns#
rainbow : MultiRainbow
A MultiRainbow comparison object including all input Rainbows.
Source code in chromatic/rainbows/actions/compare.py
Flag outliers as not ok.
This examines the flux array, identifies significant outliers, and marks them 0 in the ok array. The default procedure is to use a median filter to remove temporal trends (remove_trends), inflate the uncertainties based on the median-absolute-deviation scatter (inflate_uncertainty), and call points outliers if they deviate by more than a certain number of sigma (how_many_sigma) from the median-filtered level.
The returned Rainbow object should be identical to the input one, except that some elements in the ok array may have been marked as zero. (The filtering and inflation are not applied to the returned object.)
Parameters#
how_many_sigma : float, optional
Standard deviations (sigmas) allowed for individual data points before they are flagged as outliers.
remove_trends : bool, optional
Should we remove trends from the flux data before trying to look for outliers?
inflate_uncertainty : bool, optional
Should uncertainties per wavelength be inflated to match the (MAD-based) standard deviation of the data?
Returns#
rainbow : Rainbow
A new Rainbow object with the outliers flagged as 0 in .ok.
Source code in chromatic/rainbows/actions/flag_outliers.py
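A toy 1D sketch of the flag_outliers idea: detrend with a median, estimate the scatter from the MAD, and flag points beyond how_many_sigma (illustrative only; chromatic's real implementation works on the full 2D flux array and uses a running median filter):

```python
import statistics

# Sketch of the flag_outliers procedure on a single 1D time series.
# Not chromatic's actual implementation.

def flag_outliers_1d(flux, how_many_sigma=5):
    level = statistics.median(flux)           # crude stand-in for a median filter
    residuals = [f - level for f in flux]
    mad = statistics.median(abs(r) for r in residuals)
    sigma = 1.4826 * mad                      # MAD -> Gaussian-equivalent sigma
    return [1 if abs(r) <= how_many_sigma * sigma else 0 for r in residuals]

flux = [1.00, 1.01, 0.99, 1.02, 5.00, 1.00, 0.98]
ok = flag_outliers_1d(flux)
# only the 5.00 cosmic-ray-like point gets flagged as 0
```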
Fold this Rainbow to a period and reference epoch.
This changes the times from some original time into a phased time, for example the time within an orbital period, relative to the time of mid-transit. This is mostly a convenience function for plotting data relative to mid-transit and/or trimming data based on orbital phase.
Parameters#
period : Quantity
The orbital period of the planet (with astropy units of time).
t0 : Quantity
A mid-transit epoch (with astropy units of time).
event : str
A description of the event that happens periodically. For example, you might want to switch this to 'Mid-Eclipse' (as well as offsetting t0 by the appropriate amount relative to transit). This description may be used in plot labels.
Returns#
folded : Rainbow
The folded Rainbow.
Source code in chromatic/rainbows/actions/fold.py
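The folding arithmetic itself is simple; here is a sketch in plain floats (chromatic works with astropy Quantities, and the helper name here is invented for illustration):

```python
# Sketch: map each time to a phased time within one period of t0.
# Times in days; illustrative only.

def fold_times(times, period, t0):
    """Map each time into a window one period wide, centered on zero."""
    return [((t - t0 + 0.5 * period) % period) - 0.5 * period for t in times]

period, t0 = 3.0, 10.0
times = [10.0, 11.4, 8.6, 13.0]
phased = fold_times(times, period, t0)
# mid-transit times map to ~0; others to their offset within one period
```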
Inflate uncertainties to match observed scatter.
This is a quick and approximate tool for inflating the flux uncertainties in a Rainbow to match the observed scatter. With defaults, this will estimate the scatter using a robust median-absolute-deviation estimate of the standard deviation (method='MAD'), applied to time series from which temporal trends have been removed (remove_trends=True), and inflate the uncertainties on a per-wavelength basis. The trend removal, by default done by subtracting off local medians (remove_trends_method='median_filter'), will squash many types of both astrophysical and systematic trends, so this function should be used with caution in applications where precise and reliable uncertainties are needed.
Parameters#
method : string
What method to use to obtain measured scatter.
Current options are 'MAD', 'standard-deviation'.
remove_trends : bool
Should we remove trends before estimating by how
much we need to inflate the uncertainties?
remove_trends_method : str
What method should be used to remove trends? See .remove_trends for options.
remove_trends_kw : dict
What keyword arguments should be passed to remove_trends?
minimum_inflate_ratio : float, optional
The minimum allowed inflation ratio. We don't want people to deflate uncertainties except in very specific cases of unstable pipeline output.
Returns#
inflated : Rainbow
The Rainbow with the uncertainties inflated.
Source code in chromatic/rainbows/actions/inflate_uncertainty.py
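A toy sketch of the inflation logic: estimate the scatter robustly from the MAD, compare it to the reported uncertainty, and never go below minimum_inflate_ratio (the helper name and numbers are invented for illustration):

```python
import statistics

# Sketch: by how much should per-wavelength uncertainties be inflated?
# Illustrative only, not chromatic's actual implementation.

def inflation_ratio(residuals, reported_sigma, minimum_inflate_ratio=1.0):
    mad = statistics.median(abs(r) for r in residuals)
    measured_sigma = 1.4826 * mad            # robust scatter estimate
    return max(measured_sigma / reported_sigma, minimum_inflate_ratio)

# reported uncertainties are optimistic: the actual scatter is ~2x larger
residuals = [-0.02, 0.01, -0.01, 0.02, 0.015, -0.015]
ratio = inflation_ratio(residuals, reported_sigma=0.01)
```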
Inject uncorrelated random noise into the .flux array.
This injects independent noise into each data point, drawn from either a Gaussian or Poisson distribution. The inputs can be scalars, or they can be arrays that we will try to broadcast into the shape of the .flux array.
Parameters#
signal_to_noise : float, array, optional
The signal-to-noise per wavelength per time. For example, S/N=100 would mean that the uncertainty on the flux for each wavelength-time data point will be 1%. If it is a scalar, then every point is the same. If it is an array with a fluxlike, wavelike, or timelike shape, it will be broadcast appropriately.
number_of_photons : float, array, optional
The number of photons expected to be received from the light source per wavelength and time. If it is a scalar, then every point is the same. If it is an array with a fluxlike, wavelike, or timelike shape, it will be broadcast appropriately.
If number_of_photons is set, then signal_to_noise will be ignored.
Returns#
rainbow : Rainbow
A new Rainbow object with the noise injected.
Source code in chromatic/rainbows/actions/inject_noise.py
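A sketch of Gaussian noise injection at a fixed signal-to-noise (illustrative only; chromatic draws noise across the whole flux array, and the variable names here are invented):

```python
import random

# Sketch: inject Gaussian noise with sigma = flux / (S/N) at each point.
# Illustrative only.

random.seed(42)
signal_to_noise = 100.0
model_flux = [1.0] * 1000

# each point gets sigma = flux / (S/N), here 1%
noisy_flux = [f + random.gauss(0.0, f / signal_to_noise) for f in model_flux]
scatter = (sum((f - 1.0) ** 2 for f in noisy_flux) / len(noisy_flux)) ** 0.5
# scatter should come out near 0.01
```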
Inject some random outliers.
To approximately simulate cosmic rays or other rare weird outliers, this randomly injects outliers into a small fraction of pixels. For this simple method, all outliers will have the same amplitude, either as a ratio above the per-data-point uncertainty or as a fixed number (if no uncertainties exist).
Parameters#
fraction : float, optional
The fraction of pixels that should get outliers. (default = 0.01)
amplitude : float, optional
If uncertainty > 0, how many sigma should outliers be? If uncertainty = 0, what number should be injected? (default = 10)
Returns#
rainbow : Rainbow
A new Rainbow object with outliers injected.
Source code in chromatic/rainbows/actions/inject_outliers.py
Inject a stellar spectrum into the flux.
This injects a constant stellar spectrum into all times in the Rainbow. Injection happens by multiplying the .model flux array, so for example a model that already has a transit in it will be scaled up to match the stellar spectrum in all wavelengths.
Parameters#
temperature : float, optional
Temperature, in K (with no astropy units attached).
logg : float, optional
Surface gravity log10[g/(cm/s**2)] (with no astropy units attached).
metallicity : float, optional
Metallicity log10[metals/solar] (with no astropy units attached).
radius : Quantity, optional
The radius of the star.
distance : Quantity, optional
The distance to the star.
phoenix : bool, optional
If True, use PHOENIX surface flux.
If False, use Planck surface flux.
Returns#
rainbow : Rainbow
A new Rainbow object with the spectrum injected.
Source code in chromatic/rainbows/actions/inject_spectrum.py
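For the phoenix=False branch, the surface flux follows a Planck blackbody. As a rough illustration, here is the Planck function in SI units (chromatic's actual units and normalization may differ):

```python
import math

# Sketch: the Planck function B_lambda(T), the basis of the
# phoenix=False surface flux. SI units; illustrative only.

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m / s
k_B = 1.380649e-23   # Boltzmann constant, J / K

def planck(wavelength_m, temperature_K):
    """Spectral radiance B_lambda in W / m^3 / sr."""
    x = h * c / (wavelength_m * k_B * temperature_K)
    return 2 * h * c**2 / wavelength_m**5 / (math.exp(x) - 1)

# a hotter star is brighter at every wavelength
b_hot = planck(1e-6, 6000)
b_cool = planck(1e-6, 3000)
```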
Inject some (very cartoony) instrumental systematics.
Here's the basic procedure:
1) Generate some fake variables that vary either just with wavelength, just with time, or with both time and wavelength. Store these variables for later use. For example, these might represent an average x and y centroid of the trace on the detector (one for each time), or the background flux associated with each wavelength (one for each time and for each wavelength).
2) Generate a flux model as some function of those variables. In reality, we probably don't know the actual relationship between these inputs and the flux, but we can imagine one!
3) Inject the model flux into the flux of this Rainbow, and store the combined model in systematics-model and each individual component in systematic-model-{...}.
Parameters#
amplitude : float, optional
The (standard deviation-ish) amplitude of the systematics in units normalized to 1. For example, an amplitude of 0.003 will produce systematic trends that tend to range (at 1 sigma) from 0.997 to 1.003.
wavelike : list of strings, optional
A list of wave-like cotrending quantities to serve as ingredients to a linear combination systematics model. Existing quantities will be pulled from the appropriate core dictionary; fake data will be created for quantities that don't already exist, from a cartoony Gaussian process model.
timelike : list of strings, optional
A list of time-like cotrending quantities to serve as ingredients to a linear combination systematics model. Existing quantities will be pulled from the appropriate core dictionary; fake data will be created for quantities that don't already exist, from a cartoony Gaussian process model.
fluxlike : list of strings, optional
A list of flux-like cotrending quantities to serve as ingredients to a linear combination systematics model. Existing quantities will be pulled from the appropriate core dictionary; fake data will be created for quantities that don't already exist, from a cartoony Gaussian process model.
Returns#
rainbow : Rainbow
A new Rainbow object with the systematics injected.
Source code in chromatic/rainbows/actions/inject_systematics.py
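A toy sketch of step 2, a flux model built as a linear combination of cotrending variables (all names and coefficients here are invented; this is not chromatic's actual model-building code):

```python
# Sketch: a multiplicative systematics model near 1, built as a linear
# combination of a per-wavelength variable and a per-time variable.
# Illustrative only.

def systematics_model(wavelike_var, timelike_var, c_wave=0.003, c_time=0.003):
    """Build a (nwave, ntime) model as 1 + sum of scaled cotrending terms."""
    return [
        [1.0 + c_wave * w + c_time * t for t in timelike_var]
        for w in wavelike_var
    ]

b = [0.0, 1.0, -1.0]   # a fake background level, one value per wavelength
x = [0.5, -0.5]        # a fake centroid drift, one value per time
model = systematics_model(b, x)
```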
Simulate a wavelength-dependent planetary transit.
This uses one of a few methods to inject a transit signal into the Rainbow, allowing the transit depth to change with wavelength (for example due to a planet's effective radius changing with wavelength due to its atmospheric transmission spectrum). Other parameters can also be wavelength-dependent, but some (like period, inclination, etc.) probably shouldn't be.
The current methods include:
'trapezoid' to inject a cartoon transit, using nomenclature from Winn (2010). This is the default method, to avoid package dependencies that can be finicky to compile and/or install on different operating systems.
'exoplanet' to inject a limb-darkened transit using exoplanet-core. This option requires that exoplanet-core be installed, but it doesn't require complicated dependencies or compiling steps, so it's already included as a dependency.
'batman' to inject a limb-darkened transit using batman-package. This method requires that batman-package be installed, and it will try to throw a helpful warning message if it's not.
Parameters#
planet_radius : float, array, None
The planet-to-star radius ratio = [transit depth]^0.5, which can be either a single value for all wavelengths, or an array with one value for each wavelength.
method : str
What method should be used to inject the transits? Different methods will produce different results and have different options. The currently implemented options are 'trapezoid', 'exoplanet', and 'batman'.
transit_parameters : dict
All additional keywords will be passed to the transit model.
The accepted keywords for the different methods are as follows.
'trapezoid' accepts the following keyword arguments:
delta = The depth of the transit, as a fraction of the out-of-transit flux (default 0.01). (If not provided, it will be set by planet_radius.)
P = The orbital period of the planet, in days (default 3.0)
t0 = Mid-transit time of the transit, in days (default 0.0)
T = The duration of the transit (from mid-ingress to mid-egress), in days (default 0.1)
tau = The duration of ingress/egress, in days (default 0.01)
baseline = The baseline, out-of-transit flux level (default 1.0)
'exoplanet-core' accepts the following keyword arguments:
rp = (planet radius)/(star radius), unitless (default 0.1). (If not provided, it will be set by planet_radius.)
t0 = Mid-transit time of the transit, in days (default 0.0)
per = The orbital period of the planet, in days (default 3.0)
a = (semi-major axis)/(star radius), unitless (default 10)
inc = The orbital inclination, in degrees (default 90)
ecc = The orbital eccentricity, unitless (default 0.0)
w = The longitude of periastron, in degrees (default 0.0)
u = The quadratic limb-darkening coefficients (default [0.2, 0.2]). These coefficients can only be a 2D array of the form (n_wavelengths, n_coefficients), where each row is the set of limb-darkening coefficients corresponding to a single wavelength.
'batman' accepts the following keyword arguments:
rp = (planet radius)/(star radius), unitless (default 0.1). (If not provided, it will be set by planet_radius.)
t0 = Mid-transit time of the transit, in days (default 0.0)
per = The orbital period of the planet, in days (default 1.0)
a = (semi-major axis)/(star radius), unitless (default 10)
inc = The orbital inclination, in degrees (default 90)
ecc = The orbital eccentricity, unitless (default 0.0)
w = The longitude of periastron, in degrees (default 0.0)
limb_dark = The limb-darkening model (default "quadratic"); possible values are described in more detail in the batman documentation.
u = The limb-darkening coefficients (default [0.2, 0.2]). These coefficients can be:
- one value (if the limb-darkening law requires only one value)
- a 1D list/array of coefficients for constant limb-darkening
- a 2D array of the form (n_wavelengths, n_coefficients), where each row is the set of limb-darkening coefficients corresponding to a single wavelength
Note that this currently does not calculate the appropriate
coefficient vs wavelength variations itself; there exist codes
(such as hpparvi/PyLDTk and nespinoza/limb-darkening) which
can be used for this.
Source code in chromatic/rainbows/actions/inject_transit.py
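A sketch of the 'trapezoid' flux model in the Winn (2010) nomenclature used above (illustrative only; chromatic evaluates this across the whole wavelength-time grid, and the helper name here is invented):

```python
# Sketch: a trapezoidal transit in the Winn (2010) nomenclature:
# depth delta, mid-transit t0, duration T (mid-ingress to mid-egress),
# ingress/egress duration tau, out-of-transit baseline. Times in days.

def trapezoid_flux(t, delta=0.01, t0=0.0, T=0.1, tau=0.01, baseline=1.0):
    x = abs(t - t0)
    if x >= T / 2 + tau / 2:
        dip = 0.0                                  # out of transit
    elif x <= T / 2 - tau / 2:
        dip = delta                                # fully in transit
    else:
        dip = delta * (T / 2 + tau / 2 - x) / tau  # on the ingress/egress ramp
    return baseline * (1.0 - dip)

in_transit = trapezoid_flux(0.0)
out_of_transit = trapezoid_flux(0.2)
mid_ingress = trapezoid_flux(-0.05)   # halfway down the ingress ramp
```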
normalize(self, axis='wavelength', percentile=50)
#
Normalize by dividing through by the median spectrum and/or lightcurve.
This normalizes a Rainbow by dividing through by a wavelength-dependent normalization. With default inputs, this will normalize each wavelength to have flux values near 1, to make it easier to see differences across time (such as a transit or eclipse). This function could also be used to divide through by a median light curve, to make it easier to see variations across wavelength.
Parameters#
axis : str
The axis that should be normalized out. w or wave or wavelength will divide out the typical spectrum; t or time will divide out the typical light curve.
percentile : float
A number between 0 and 100, specifying the percentile of the data along an axis to use as the reference. The default of percentile=50 corresponds to the median. If you want to normalize to out-of-transit, maybe you want a higher percentile. If you want to normalize to the baseline below a flare, maybe you want a lower percentile.
Returns#
normalized : Rainbow
The normalized Rainbow.
Source code in chromatic/rainbows/actions/normalization.py
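A toy sketch of normalizing along the wavelength axis with the default percentile=50 (the median); the helper name and nested-list layout are invented for illustration:

```python
import statistics

# Sketch: divide each wavelength's time series by its own median,
# so every wavelength hovers around 1. Illustrative only.

def normalize_each_wavelength(flux):
    """flux is a (nwave, ntime) nested list; divide each row by its median."""
    normalized = []
    for row in flux:
        norm = statistics.median(row)
        normalized.append([f / norm for f in row])
    return normalized

flux = [
    [2.0, 2.2, 1.8],    # a bright wavelength
    [0.5, 0.55, 0.45],  # a faint one
]
normalized = normalize_each_wavelength(flux)
# both rows now hover around 1, with the same fractional variations
```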
__add__(self, other)
#
Add the flux of a rainbow and an input array (or another rainbow), and output the result in a new Rainbow object.
Parameters#
other : Array or float
Multiple options:
1) float
2) 1D array with same length as wavelength axis
3) 1D array with same length as time axis
4) 2D array with same shape as rainbow flux
5) Rainbow object with same dimensions as self
Returns#
rainbow : Rainbow
A new Rainbow with the mathematical operation applied.
Source code in chromatic/rainbows/actions/operations.py
__eq__(self, other)
#
Test whether self == other.
This compares the wavelike, timelike, and fluxlike arrays for exact matches. It skips entirely over the metadata.
Parameters#
other : Rainbow
Another Rainbow to compare to.
Returns#
equal : bool Are they (effectively) equivalent?
Source code in chromatic/rainbows/actions/operations.py
__mul__(self, other)
#
Multiply the flux of a rainbow and an input array (or another rainbow), and output the result in a new Rainbow object.
Parameters#
other : Array or float
Multiple options:
1) float
2) 1D array with same length as wavelength axis
3) 1D array with same length as time axis
4) 2D array with same shape as rainbow flux
5) Rainbow object with same dimensions as self
Returns#
rainbow : Rainbow
A new Rainbow with the mathematical operation applied.
Source code in chromatic/rainbows/actions/operations.py
__sub__(self, other)
#
Subtract an input array (or another rainbow) from the flux of a rainbow, and output the result in a new Rainbow object.
Parameters#
other : Array or float
Multiple options:
1) float
2) 1D array with same length as wavelength axis
3) 1D array with same length as time axis
4) 2D array with same shape as rainbow flux
5) Rainbow object with same dimensions as self
Returns#
rainbow : Rainbow
A new Rainbow with the mathematical operation applied.
Source code in chromatic/rainbows/actions/operations.py
__truediv__(self, other)
#
Divide the flux of a rainbow by an input array (or another rainbow), and output the result in a new Rainbow object.
Parameters#
other : Array or float
Multiple options:
1) float
2) 1D array with same length as wavelength axis
3) 1D array with same length as time axis
4) 2D array with same shape as rainbow flux
5) Rainbow object with same dimensions as self
Returns#
rainbow : Rainbow
A new Rainbow with the mathematical operation applied.
Source code in chromatic/rainbows/actions/operations.py
diff(self, other)
#
Test whether self == other, and print the differences.
This compares the wavelike, timelike, and fluxlike arrays for exact matches. It skips entirely over the metadata.
The diff function is the same as __eq__, but a little more verbose, just to serve as a helpful debugging tool.
Parameters#
other : Rainbow
Another Rainbow to compare to.
Returns#
equal : bool Are they (effectively) equivalent?
Source code in chromatic/rainbows/actions/operations.py
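The broadcasting behavior for option 2 above (a wavelike array against the 2D flux) can be sketched like this, using toy nested lists and an invented helper name (chromatic itself handles this with array broadcasting):

```python
# Sketch: applying a wavelike (nwave,) array across a (nwave, ntime)
# flux, as in rainbow + offsets. Each wavelength's value is applied
# to all of its times. Illustrative only.

def add_wavelike(flux, wavelike):
    """flux: (nwave, ntime) nested list; wavelike: one value per wavelength."""
    return [[f + w for f in row] for row, w in zip(flux, wavelike)]

flux = [[1.0, 1.0, 1.0],
        [2.0, 2.0, 2.0]]
offsets = [0.1, 0.2]
shifted = add_wavelike(flux, offsets)
```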
A quick tool to approximately remove trends.
This function provides some simple tools for kludgily removing trends from a Rainbow, through a variety of filtering methods. If you just want to remove all slow trends, whether astrophysical or instrumental, options like median_filter or savgol_filter will effectively suppress all trends on timescales longer than their filtering window. If you want a more restricted approach to removing long trends, the polyfit option allows you to fit out slow trends.
Parameters#
method : str, optional
What method should be used to make an approximate model for smooth trends that will then be subtracted off?
differences will do an extremely rough filtering of replacing the fluxes with their first differences. Trends that are smooth relative to the noise will be removed this way, but sharp features will remain. Required keywords: None.
median_filter is a wrapper for scipy.signal.median_filter. It smooths each data point to the median of its surrounding points in time and/or wavelength. Required keywords:
size = centered on each point, what shape rectangle should be used to select surrounding points for the median? The dimensions are (nwavelengths, ntimes), so size=(3,7) means we'll take the median across three wavelengths and seven times. Default is (1,5).
savgol_filter is a wrapper for scipy.signal.savgol_filter. It applies a Savitzky-Golay filter for polynomial smoothing. Required keywords:
window_length = the length of the filter window, which must be a positive odd integer. Default is 5.
polyorder = the order of the polynomial to use. Default is 2.
polyfit is a wrapper for numpy.polyfit, using a weighted linear least squares polynomial fit to remove smooth trends in time. Required keywords:
deg = the polynomial degree, which must be a positive integer. Default is 1, meaning a line.
custom allows users to pass any fluxlike array of model values for an astrophysical signal, to remove it. Required keywords:
model = the (nwavelengths, ntimes) model array
**kw : dict, optional
Any additional keywords will be passed to the function that does the filtering. See the method keyword for options.
Returns#
removed : Rainbow The Rainbow with estimated signals removed.
Source code in chromatic/rainbows/actions/remove_trends.py
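A toy sketch of the differences method: first differences flatten smooth trends but leave sharp features visible (illustrative only; the helper name and numbers are invented):

```python
# Sketch: the 'differences' method replaces each flux with its first
# difference, which suppresses smooth trends but keeps sharp features.
# Illustrative only.

def first_differences(flux):
    return [b - a for a, b in zip(flux[:-1], flux[1:])]

# a smooth linear trend plus one sharp jump
flux = [1.0, 1.1, 1.2, 2.2, 2.3]
diffs = first_differences(flux)
# the smooth trend becomes a constant ~0.1; the jump stands out as ~1.0
```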
Doppler shift the wavelengths of this Rainbow.
This shifts the wavelengths in a Rainbow by applying a velocity shift. Positive velocities make wavelengths longer (redshift); negative velocities make wavelengths shorter (blueshift).
Parameters#
velocity : Quantity
The systemic velocity by which we should shift, with units of velocity (for example, u.km/u.s).
Source code in chromatic/rainbows/actions/shift.py
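The underlying arithmetic is the non-relativistic Doppler formula, w_shifted = w * (1 + v/c); a sketch in plain floats (chromatic expects an astropy Quantity, and the helper name here is invented):

```python
# Sketch: non-relativistic Doppler shift of a wavelength array.
# Positive velocity -> longer wavelengths (redshift). Illustrative only.

C_KM_S = 299792.458  # speed of light in km/s

def doppler_shift(wavelengths, velocity_km_s):
    factor = 1.0 + velocity_km_s / C_KM_S
    return [w * factor for w in wavelengths]

w = [0.5, 1.0, 2.0]                   # microns
redshifted = doppler_shift(w, +30.0)  # receding at 30 km/s
blueshifted = doppler_shift(w, -30.0)
```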
Trim away bad wavelengths and/or times.
If entire wavelengths or times are marked as not ok, we can probably remove them to simplify calculations and visualizations. This function will trim those away, by default only removing problem rows/columns on the ends, to maintain a contiguous block.
Parameters#
t_min : u.Quantity
The minimum time to keep.
t_max : u.Quantity
The maximum time to keep.
w_min : u.Quantity
The minimum wavelength to keep.
w_max : u.Quantity
The maximum wavelength to keep.
just_edges : bool, optional
Should we only trim the outermost bad wavelength bins?
True = Just trim off the bad edges and keep interior bad values. Keeping interior data, even if they're all bad, often helps to make for more intuitive imshow plots.
False = Trim off every bad wavelength, whether it's on the edge or somewhere in the middle of the dataset. The resulting Rainbow will be smaller, but it might be a little tricky to visualize with imshow.
when_to_give_up : float, optional
The fraction of times that must be nan or not OK for the entire wavelength to be considered bad (default = 1).
1.0 = trim only if all times are bad
0.5 = trim if more than 50% of times are bad
0.0 = trim if any times are bad
minimum_acceptable_ok : float, optional
The numbers in the .ok
attribute express "how OK?" each
data point is, ranging from 0 (not OK) to 1 (super OK).
In most cases, .ok
will be binary, but there may be times
where it's intermediate (for example, if a bin was created
from some data that were not OK and some that were).
The minimum_acceptable_ok parameter lets you specify the level of OK-ness required for a point to not get trimmed.
Returns#
trimmed : Rainbow
The trimmed Rainbow.
Source code in chromatic/rainbows/actions/trim.py
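A toy sketch of the just_edges=True behavior: trim bad rows only from the ends and keep interior ones, so the result stays a contiguous block (invented helper; not chromatic's actual code):

```python
# Sketch: just_edges-style trimming keeps everything between the
# outermost good rows, including interior bad rows. Illustrative only.

def trim_edges(ok_per_row):
    """Return the (start, stop) slice keeping the outermost good rows."""
    good = [i for i, ok in enumerate(ok_per_row) if ok]
    if not good:
        return (0, 0)
    return (good[0], good[-1] + 1)

ok = [False, False, True, False, True, True, False]
start, stop = trim_edges(ok)
kept = ok[start:stop]   # the interior bad row survives
```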
🌈 Get/Timelike#
get_average_lightcurve(self)
#
Return a lightcurve of the star, averaged over all wavelengths.
This uses bin
, which is a horribly slow way of doing what is
fundamentally a very simple array calculation, because we
don't need to deal with partial pixels.
Returns#
lightcurve : array
Timelike array of fluxes.
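The underlying calculation is just an average along the wavelength axis; a minimal numpy sketch (made-up flux grid, ignoring uncertainty weighting and partial pixels):

```python
import numpy as np

flux = np.array([[1.0, 2.0],
                 [3.0, 4.0]])  # shape (nwave, ntime)

# average over all wavelengths -> one value per time
lightcurve = np.nanmean(flux, axis=0)  # → array([2., 3.])
```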
Source code in chromatic/rainbows/get/timelike/average_lightcurve.py
get_median_lightcurve(self)
#
Return a lightcurve of the star, medianed over all wavelengths.
Returns#
median_lightcurve : array
Timelike array of fluxes.
Source code in chromatic/rainbows/get/timelike/median_lightcurve.py
get_for_time(self, i, quantity='flux')
#
Get 'quantity'
associated with time 'i'
.
Parameters#
i : int
The time index to retrieve.
quantity : string
The quantity to retrieve. If it is flux-like, column 'i'
will be returned. If it is wave-like, the array itself
will be returned.
Returns#
quantity : array, Quantity
The 1D array of 'quantity' corresponding to time 'i'.
Source code in chromatic/rainbows/get/timelike/subset.py
get_ok_data_for_time(self, i, x='wavelength', y='flux', sigma='uncertainty', minimum_acceptable_ok=1, express_badness_with_uncertainty=False)
#
A small wrapper to get the good data from a time.
Extract a slice of data, marking data that are not ok
either
by trimming them out entirely or by inflating their
uncertainties to infinity.
Parameters#
i : int
The time index to retrieve.
x : string, optional
What quantity should be retrieved as 'x'? (default = 'wavelength')
y : string, optional
What quantity should be retrieved as 'y'? (default = 'flux')
sigma : string, optional
What quantity should be retrieved as 'sigma'? (default = 'uncertainty')
minimum_acceptable_ok : float, optional
The smallest value of ok
that will still be included.
(1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
express_badness_with_uncertainty : bool, optional
If False, data that don't pass the ok
cut will be removed.
If True, data that don't pass the ok
cut will have their
uncertainties inflated to infinity (np.inf).
Returns#
x : array
The quantity retrieved as 'x' (default is wavelength).
y : array
The desired quantity (default is flux
)
sigma : array
The uncertainty on the desired quantity
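Both badness-handling modes can be sketched in plain numpy. These are hypothetical arrays illustrating the idea, not the chromatic internals:

```python
import numpy as np

wavelength = np.array([1.0, 2.0, 3.0])        # (nwave,)
flux = np.array([[1.0, 2.0],
                 [3.0, 4.0],
                 [5.0, 6.0]])                 # (nwave, ntime)
uncertainty = np.full_like(flux, 0.1)
ok = np.array([[1, 1],
               [0, 1],
               [1, 1]])

i, minimum_acceptable_ok = 0, 1
good = ok[:, i] >= minimum_acceptable_ok

# express_badness_with_uncertainty=False: drop bad points entirely
x, y, sigma = wavelength[good], flux[good, i], uncertainty[good, i]

# express_badness_with_uncertainty=True: keep everything,
# but inflate the bad points' uncertainties to infinity
sigma_inflated = np.where(good, uncertainty[:, i], np.inf)
```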
Source code in chromatic/rainbows/get/timelike/subset.py
get_times_as_astropy(self, time=None, format=None, scale=None, is_barycentric=None)
#
Convert times from a Rainbow
into an astropy Time
object.
Parameters#
time : Quantity, optional
The time-like Quantity to be converted.
If None (default), convert the time values in self.time
If another time-like Quantity, convert those values.
format : str, optional
The time format to supply to astropy.time.Time.
If None (default), format will be pulled from
self.metadata['time_details']['format']
scale : str, optional
The time scale to supply to astropy.time.Time.
If None (default), scale will be pulled from
self.metadata['time_details']['scale']
is_barycentric : bool, optional
Are the times already measured relative to the
Solar System barycenter? This is mostly for warning
the user that it's not.
If None
(default), is_barycentric
will be pulled from
self.metadata['time_details']['is_barycentric']
Returns#
astropy_time : Time
The times as an astropy Time
object.
Source code in chromatic/rainbows/get/timelike/time.py
set_times_from_astropy(self, astropy_time, is_barycentric=None)
#
Set the times for this Rainbow
from an astropy Time
object.
Parameters#
astropy_time : Time
The times as an astropy Time
object.
is_barycentric : bool, optional
Are the times already measured relative to the
Solar System barycenter? This is mostly for warning
the user that it's not. Options are True, False,
None (= don't know).
Returns#
time : Quantity
An astropy Quantity with units of time,
expressing the Time as julian day.
In addition to this returned variable,
the function sets the following internal
variables:
self.time # (= the astropy Quantity of times)
self.metadata['time_format'] # (= the format to convert back to Time)
self.metadata['time_scale'] # (= the scale to convert back to Time)
self.metadata['time_is_barycentric'] # (= is it barycentric?)
Source code in chromatic/rainbows/get/timelike/time.py
🌈 Get/Wavelike#
get_average_spectrum(self)
#
Return an average spectrum of the star, averaged over all times.
This uses bin
, which is a horribly slow way of doing what is
fundamentally a very simple array calculation, because we
don't need to deal with partial pixels.
Returns#
average_spectrum : array
Wavelike array of average spectrum.
Source code in chromatic/rainbows/get/wavelike/average_spectrum.py
get_expected_uncertainty(self, function=np.nanmedian, *args, **kw)
#
Get the typical per-wavelength uncertainty.
Parameters#
function : function, optional
What function should be used to choose the "typical"
value for each wavelength? Good options are probably
things like np.nanmedian, np.median, np.nanmean,
np.mean, or np.percentile.
*args : list, optional
Additional arguments will be passed to function.
**kw : dict, optional
Additional keyword arguments will be passed to function.
Returns#
uncertainty_per_wavelength : array
The uncertainty associated with each wavelength.
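For the default function=np.nanmedian, this reduces to a median along the time axis; a sketch with made-up numbers:

```python
import numpy as np

uncertainty = np.array([[0.1, 0.2, 0.3],
                        [1.0, 2.0, 3.0]])  # (nwave, ntime)

# the "typical" uncertainty for each wavelength
expected = np.nanmedian(uncertainty, axis=1)  # → array([0.2, 2. ])
```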
Source code in chromatic/rainbows/get/wavelike/expected_uncertainty.py
get_measured_scatter_in_bins(self, ntimes=2, nbins=4, method='standard-deviation', minimum_acceptable_ok=1e-10)
#
Get measured scatter in time bins of increasing sizes.
For uncorrelated Gaussian noise, the scatter should decrease as 1/sqrt(N), where N is the number points in a bin. This function calculates the scatter for a range of N, thus providing a quick test for correlated noise.
Parameters#
ntimes : int
How many times should be binned together? Binning will
continue recursively until fewer than nbins would be left.
nbins : int
What's the smallest number of bins that should be used to
calculate a scatter? The absolute minimum is 2.
method : string
What method to use to obtain measured scatter. Current options are 'MAD', 'standard-deviation'.
minimum_acceptable_ok : float
The smallest value of ok
that will still be included.
(1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
Returns#
scatter_dictionary : dict
Dictionary with lots of information about scatter in bins per wavelength.
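The 1/sqrt(N) expectation for uncorrelated noise can be checked directly with numpy; a rough sketch of the idea (not chromatic's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
flux = rng.normal(1.0, 0.01, size=10000)  # uncorrelated Gaussian noise

scatter_unbinned = np.std(flux)

# bin ntimes=4 points together, then re-measure the scatter
binned = flux.reshape(-1, 4).mean(axis=1)
scatter_binned = np.std(binned)

# for uncorrelated noise, scatter_binned ≈ scatter_unbinned / sqrt(4);
# correlated noise would decrease more slowly than this
```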
Source code in chromatic/rainbows/get/wavelike/measured_scatter_in_bins.py
get_measured_scatter(self, quantity='flux', method='standard-deviation', minimum_acceptable_ok=1e-10)
#
Get measured scatter for each wavelength.
Calculate the standard deviation (or outlier-robust equivalent) for each wavelength, which can be compared to the expected per-wavelength uncertainty.
Parameters#
quantity : string, optional
The fluxlike
quantity for which we should calculate the scatter.
method : string, optional
What method to use to obtain measured scatter.
Current options are 'MAD', 'standard-deviation'.
minimum_acceptable_ok : float, optional
The smallest value of ok
that will still be included.
(1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
Returns#
scatter : array
Wavelike array of measured scatters.
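The two method options correspond to these per-wavelength estimators; a numpy sketch, where 1.4826 is the usual factor that scales a median absolute deviation to match a Gaussian standard deviation:

```python
import numpy as np

flux = np.array([[1.0, 2.0, 3.0, 4.0, 5.0]])  # (nwave, ntime)

# method='standard-deviation'
scatter_std = np.std(flux, axis=1)

# method='MAD' (outlier-robust median absolute deviation)
med = np.median(flux, axis=1, keepdims=True)
scatter_mad = 1.4826 * np.median(np.abs(flux - med), axis=1)
```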
Source code in chromatic/rainbows/get/wavelike/measured_scatter.py
get_median_spectrum(self)
#
Return a spectrum of the star, medianed over all times.
Returns#
median_spectrum : array
Wavelike array of fluxes.
Source code in chromatic/rainbows/get/wavelike/median_spectrum.py
get_spectral_resolution(self, pixels_per_resolution_element=1)
#
Estimate the R=w/dw spectral resolution.
Higher spectral resolutions correspond to more wavelength
points within a particular interval. By default, it's
estimated for the interval between adjacent wavelength
bins. In unbinned data coming directly from a telescope,
there's a good chance that adjacent pixels both sample
the same resolution element as blurred by the telescope
optics, so the pixels_per_resolution_element
keyword
should likely be larger than 1.
Parameters#
pixels_per_resolution_element : float, optional
How many pixels do we consider as a resolution element?
Returns#
R : array
The spectral resolution at each wavelength.
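A quick sketch of the R = w/dw estimate on a hypothetical log-uniform wavelength grid, where R should come out roughly constant:

```python
import numpy as np

wavelength = np.geomspace(1.0, 2.0, 50)  # log-uniform grid

# spacing between adjacent bins (pixels_per_resolution_element = 1)
dw = np.gradient(wavelength)
R = wavelength / dw

# for a log-uniform grid, R is constant away from the edges (~70 here)
```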
Source code in chromatic/rainbows/get/wavelike/spectral_resolution.py
get_for_wavelength(self, i, quantity='flux')
#
Get 'quantity'
associated with wavelength 'i'
.
Parameters#
i : int
The wavelength index to retrieve.
quantity : string
The quantity to retrieve. If it is flux-like, row 'i'
will be returned. If it is time-like, the array itself
will be returned.
Returns#
quantity : array, Quantity
The 1D array of 'quantity' corresponding to wavelength 'i'.
Source code in chromatic/rainbows/get/wavelike/subset.py
get_ok_data_for_wavelength(self, i, x='time', y='flux', sigma='uncertainty', minimum_acceptable_ok=1, express_badness_with_uncertainty=False)
#
A small wrapper to get the good data from a wavelength.
Extract a slice of data, marking data that are not ok
either
by trimming them out entirely or by inflating their
uncertainties to infinity.
Parameters#
i : int
The wavelength index to retrieve.
x : string, optional
What quantity should be retrieved as 'x'? (default = 'time')
y : string, optional
What quantity should be retrieved as 'y'? (default = 'flux')
sigma : string, optional
What quantity should be retrieved as 'sigma'? (default = 'uncertainty')
minimum_acceptable_ok : float, optional
The smallest value of ok
that will still be included.
(1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
express_badness_with_uncertainty : bool, optional
If False, data that don't pass the ok
cut will be removed.
If True, data that don't pass the ok
cut will have their
uncertainties inflated to infinity (np.inf).
Returns#
x : array
The time.
y : array
The desired quantity (default is flux
)
sigma : array
The uncertainty on the desired quantity
Source code in chromatic/rainbows/get/wavelike/subset.py
🌈 Visualizations#
animate_lightcurves(self, filename='animated-lightcurves.gif', fps=5, dpi=None, bitrate=None, **kwargs)
#
Create an animation to show how the lightcurve changes as we flip through every wavelength.
Parameters#
filename : str
Name of file you'd like to save results in.
Currently supports only .gif or .html files.
fps : float
frames/second of animation
ax : Axes
The axes into which this animated plot should go.
xlim : tuple
Custom xlimits for the plot
ylim : tuple
Custom ylimits for the plot
cmap : str
The color map to use for expressing wavelength
vmin : Quantity
The minimum value to use for the wavelength colormap
vmax : Quantity
The maximum value to use for the wavelength colormap
scatterkw : dict
A dictionary of keywords to be passed to plt.scatter
so you can have more detailed control over the plot
appearance. Common keyword arguments might include:
[s, c, marker, alpha, linewidths, edgecolors, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
textkw : dict
A dictionary of keywords passed to plt.text
so you can have more detailed control over the text
appearance. Common keyword arguments might include:
[alpha, backgroundcolor, color, fontfamily, fontsize,
fontstyle, fontweight, rotation, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
Source code in chromatic/rainbows/visualizations/animate.py
animate_spectra(self, filename='animated-spectra.gif', fps=5, dpi=None, bitrate=None, **kw)
#
Create an animation to show how the spectrum changes as we flip through every timepoint.
Parameters#
filename : str
Name of file you'd like to save results in.
Currently supports only .gif files.
ax : Axes
The axes into which this animated plot should go.
fps : float
frames/second of animation
xlim : tuple
Custom xlimits for the plot
ylim : tuple
Custom ylimits for the plot
cmap : str
The color map to use for expressing wavelength
vmin : Quantity
The minimum value to use for the wavelength colormap
vmax : Quantity
The maximum value to use for the wavelength colormap
scatterkw : dict
A dictionary of keywords to be passed to plt.scatter
so you can have more detailed control over the plot
appearance. Common keyword arguments might include:
[s, c, marker, alpha, linewidths, edgecolors, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
textkw : dict
A dictionary of keywords passed to plt.text
so you can have more detailed control over the text
appearance. Common keyword arguments might include:
[alpha, backgroundcolor, color, fontfamily, fontsize,
fontstyle, fontweight, rotation, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
Source code in chromatic/rainbows/visualizations/animate.py
get_wavelength_color(self, wavelength)
#
Determine the color corresponding to one or more wavelengths.
Parameters#
wavelength : Quantity
The wavelength value(s), either an individual wavelength
or an array of N wavelengths.
Returns#
colors : array
An array of RGBA colors [or an (N,4) array].
Source code in chromatic/rainbows/visualizations/colors.py
setup_wavelength_colors(self, cmap=None, vmin=None, vmax=None, log=None)
#
Set up a color map and normalization function for coloring datapoints by their wavelengths.
Parameters#
cmap : str, Colormap
The color map to use.
vmin : Quantity
The wavelength at the bottom of the cmap.
vmax : Quantity
The wavelength at the top of the cmap.
log : bool
If True, colors will scale with log(wavelength).
If False, colors will scale with wavelength.
If None, the scale will be guessed from the internal wscale.
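The normalization step amounts to mapping wavelengths onto [0, 1] before looking them up in the colormap; a sketch of the log=True case, using hypothetical limits rather than the chromatic defaults:

```python
import numpy as np

vmin, vmax = 0.5, 5.0                  # colormap limits (assumed, in microns)
wavelength = np.array([0.5, 1.58, 5.0])

# log=True: colors scale with log(wavelength)
norm = (np.log10(wavelength) - np.log10(vmin)) / (np.log10(vmax) - np.log10(vmin))
# norm is 0 at vmin and 1 at vmax; this value can be fed
# to any matplotlib colormap to pick a color
```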
Source code in chromatic/rainbows/visualizations/colors.py
Paint a 2D image of flux as a function of time and wavelength,
using plt.imshow
where pixels will have constant size.
Parameters#
ax : Axes, optional
The axes into which to make this plot.
quantity : str, optional
The fluxlike quantity to imshow.
(Must be a key of rainbow.fluxlike
).
w_unit : str, Unit, optional
The unit for plotting wavelengths.
t_unit : str, Unit, optional
The unit for plotting times.
colorbar : bool, optional
Should we include a colorbar?
aspect : str, optional
What aspect ratio should be used for the imshow?
mask_ok : bool, optional
Should we mark which data are not OK?
color_ok : str, optional
The color to be used for masking data points that are not OK.
alpha_ok : float, optional
The transparency to be used for masking data points that are not OK.
use_pcolormesh : bool
If the grid is non-uniform, should jump to using pcolormesh
instead?
Leaving this at the default of True will give the best chance of
having real Wavelength and Time axes; setting it to False will
end up showing Wavelength Index or Time Index instead (if non-uniform).
**kw : dict, optional
All other keywords will be passed on to plt.imshow
,
so you can have more detailed control over the plot
appearance. Common keyword arguments might include:
[cmap, norm, interpolation, alpha, vmin, vmax]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html
Source code in chromatic/rainbows/visualizations/imshow.py
imshow_interact(self, quantity='Flux', t_unit='d', w_unit='micron', cmap='viridis', ylim=[], ylog=None, filename=None)
#
Display interactive spectrum plot for chromatic Rainbow with a wavelength-averaged 2D quantity defined by the user. The user can interact with the 3D spectrum to choose the wavelength range over which the average is calculated.
Parameters#
self : Rainbow object
chromatic Rainbow object to plot
quantity : str
(optional, default='flux')
The quantity to imshow, currently either flux
or uncertainty
ylog : boolean
(optional, default=None)
Boolean for whether to take log10 of the y-axis data.
If None, will be guessed from the data.
t_unit : str
(optional, default='d')
The time unit to use (seconds, minutes, hours, days etc.)
w_unit : str
(optional, default='micron')
The wavelength unit to use
cmap : str
(optional, default='viridis')
The color scheme to use from Vega documentation
ylim : list
(optional, default=[])
If the user wants to define their own ylimits on the lightcurve plot
Source code in chromatic/rainbows/visualizations/interactive.py
Paint a 2D image of flux as a function of time and wavelength.
By using .pcolormesh
, pixels can transform based on their edges,
so non-uniform axes are allowed. This is a tiny bit slower than
.imshow
, but otherwise very similar.
Parameters#
ax : Axes, optional
The axes into which to make this plot.
quantity : str, optional
The fluxlike quantity to imshow.
(Must be a key of rainbow.fluxlike
).
w_unit : str, Unit, optional
The unit for plotting wavelengths.
t_unit : str, Unit, optional
The unit for plotting times.
colorbar : bool, optional
Should we include a colorbar?
mask_ok : bool, optional
Should we mark which data are not OK?
color_ok : str, optional
The color to be used for masking data points that are not OK.
alpha_ok : float, optional
The transparency to be used for masking data points that are not OK.
**kw : dict, optional
All other keywords will be passed on to plt.pcolormesh
,
so you can have more detailed control over the plot
appearance. Common keyword arguments might include:
[cmap, norm, alpha, vmin, vmax]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html
Source code in chromatic/rainbows/visualizations/pcolormesh.py
Plot flux as a sequence of offset light curves.
Parameters#
ax : Axes, optional
The axes into which to make this plot.
spacing : None, float, optional
The spacing between light curves.
(Might still change how this works.)
None uses half the standard dev of entire flux data.
w_unit : str, Unit, optional
The unit for plotting wavelengths.
t_unit : str, Unit, optional
The unit for plotting times.
cmap : str, Colormap, optional
The color map to use for expressing wavelength.
vmin : Quantity, optional
The minimum value to use for the wavelength colormap.
vmax : Quantity, optional
The maximum value to use for the wavelength colormap.
errorbar : boolean, optional
Should we plot errorbars?
text : boolean, optional
Should we label each lightcurve?
minimum_acceptable_ok : float
The smallest value of ok
that will still be included.
(1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
plotkw : dict, optional
A dictionary of keywords passed to plt.plot
so you can have more detailed control over the plot
appearance. Common keyword arguments might include:
[alpha, clip_on, zorder, marker, markersize,
linewidth, linestyle, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html
errorbarkw : dict, optional
A dictionary of keywords passed to plt.errorbar
so you can have more detailed control over the plot
appearance. Common keyword arguments might include:
[alpha, elinewidth, color, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.errorbar.html
textkw : dict, optional
A dictionary of keywords passed to plt.text
so you can have more detailed control over the text
appearance. Common keyword arguments might include:
[alpha, backgroundcolor, color, fontfamily, fontsize,
fontstyle, fontweight, rotation, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
**kw : dict, optional
Any additional keywords will be stored as kw
.
Nothing will happen with them.
Source code in chromatic/rainbows/visualizations/plot_lightcurves.py
Plot flux as a sequence of offset spectra.
Parameters#
ax : Axes
The axes into which to make this plot.
spacing : None, float
The spacing between light curves.
(Might still change how this works.)
None uses half the standard dev of entire flux data.
w_unit : str, Unit
The unit for plotting wavelengths.
t_unit : str, Unit
The unit for plotting times.
cmap : str, Colormap
The color map to use for expressing wavelength.
vmin : Quantity
The minimum value to use for the wavelength colormap.
vmax : Quantity
The maximum value to use for the wavelength colormap.
errorbar : boolean
Should we plot errorbars?
text : boolean
Should we label each spectrum?
minimum_acceptable_ok : float
The smallest value of ok
that will still be included.
(1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
scatterkw : dict
A dictionary of keywords passed to plt.scatter
so you can have more detailed control over the text
appearance. Common keyword arguments might include:
[alpha, color, s, m, edgecolor, facecolor]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
errorbarkw : dict
A dictionary of keywords passed to plt.errorbar
so you can have more detailed control over the plot
appearance. Common keyword arguments might include:
[alpha, elinewidth, color, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.errorbar.html
plotkw : dict
A dictionary of keywords passed to plt.plot
so you can have more detailed control over the plot
appearance. Common keyword arguments might include:
[alpha, clip_on, zorder, marker, markersize,
linewidth, linestyle, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html
textkw : dict
A dictionary of keywords passed to plt.text
so you can have more detailed control over the text
appearance. Common keyword arguments might include:
[alpha, backgroundcolor, color, fontfamily, fontsize,
fontstyle, fontweight, rotation, zorder]
(and more)
More details are available at
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
**kw : dict
Any additional keywords will be stored as kw
.
Nothing will happen with them.
Source code in chromatic/rainbows/visualizations/plot_spectra.py
Plot flux either as a sequence of offset lightcurves (default) or as a sequence of offset spectra.
Parameters#
xaxis : string
What should be plotted on the x-axis of the plot?
'time' will plot a different light curve for each wavelength
'wavelength' will plot a different spectrum for each timepoint
**kw : dict
All other keywords will be passed along to either
.plot_lightcurves
or .plot_spectra
as appropriate.
Please see the docstrings for either of those functions
to figure out what keyword arguments you might want to
provide here.
Source code in chromatic/rainbows/visualizations/plot.py
🔨 Tools#
Calculate the surface flux from a thermally emitted surface, according to the Planck function, in units of photons/(s * m**2 * nm).
Parameters#
temperature : Quantity
The temperature of the thermal emitter,
with units of K.
wavelength : Quantity, optional
The wavelengths at which to calculate,
with units of wavelength.
R : float, optional
The spectroscopic resolution for creating a log-uniform
grid that spans the limits set by wlim
, only if
wavelength
is not defined.
wlim : Quantity, optional
The two-element [lower, upper] limits of a wavelength
grid that would be populated with resolution R
, only if
wavelength
is not defined.
**kw : dict, optional
Other keyword arguments will be ignored.
Returns#
photons : Quantity
The surface flux in photon units.
This evaluates the Planck function at the exact wavelength values; it doesn't do anything fancy to integrate over binwidths, so if you're using very wide (R~a few) bins your integrated fluxes will be messed up.
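The quantity being computed is the Planck spectral radiance converted to photon units and integrated over the outgoing hemisphere (a factor of pi). A self-contained, unitless sketch with CODATA constants (not chromatic's actual astropy-units implementation):

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m / s
k = 1.380649e-23     # Boltzmann constant, J / K

def planck_photon_surface_flux(wavelength_m, temperature_K):
    """Surface flux in photons / (s * m**2 * nm) at the given wavelength(s)."""
    # photon spectral radiance: 2c/lambda^4 / (exp(hc/(lambda k T)) - 1)
    radiance = (2 * c / wavelength_m**4) / np.expm1(
        h * c / (wavelength_m * k * temperature_K)
    )
    # pi integrates over the hemisphere; 1e-9 converts per-m to per-nm
    return np.pi * radiance * 1e-9

# roughly solar photosphere conditions: ~2e23 photons/(s m^2 nm) at 500 nm
flux = planck_photon_surface_flux(500e-9, 5800.0)
```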
Source code in chromatic/spectra/planck.py
Get a PHOENIX model spectrum for an arbitrary temperature, logg, metallicity.
Calculate the surface flux from a thermally emitted surface, according to PHOENIX model spectra, in units of photons/(s * m**2 * nm).
Parameters#
temperature : float, optional
Temperature, in K (with no astropy units attached).
logg : float, optional
Surface gravity log10[g/(cm/s**2)] (with no astropy units attached).
metallicity : float, optional
Metallicity log10[metals/solar] (with no astropy units attached).
R : float, optional
Spectroscopic resolution (lambda/dlambda). Currently, this must
be in one of [3,10,30,100,300,1000,3000,10000,30000,100000], but
check back soon for custom wavelength grids. There is extra
overhead associated with switching resolutions, so if you're
going to retrieve many spectra, try to group by resolution.
(If you're using the wavelength
or wavelength_edges
option
below, please ensure your requested R exceeds that needed
to support your wavelengths.)
wavelength : Quantity, optional
A grid of wavelengths on which you would like your spectrum.
If this is None, the complete wavelength array will be returned
at your desired resolution. Otherwise, the spectrum will be
returned exactly at those wavelengths. Grid points will be
cached for this new wavelength grid to speed up applications
that need to retrieve lots of similar spectra for the same
wavelength (like many optimization or sampling problems).
wavelength_edges : Quantity, optional
Same as wavelength
(see above!) but defining the wavelength
grid by its edges instead of its centers. The returned spectrum
will have 1 fewer element than wavelength_edges
.
Returns#
wavelength : Quantity
The wavelengths, at the specified resolution.
photons : Quantity
The surface flux in photon units.
Source code in chromatic/spectra/phoenix.py
Tools for resampling array from one grid of independent variables to another.
bintoR(x, y, unc=None, R=50, xlim=None, weighting='inversevariance', drop_nans=True)
#
Bin any x and y array onto a logarithmically uniform grid.
Parameters#
x : array
The original independent variable.
(For a spectrum example = wavelength)
y : array
The original dependent variable (same size as x).
(For a spectrum example = flux)
unc : array, None, optional
The uncertainty on the dependent variable
(For a spectrum example = the flux uncertainty)
R : float, optional
The spectral resolution R=x/dx for creating a new,
logarithmically uniform grid that starts at the first
value of x.
xlim : list, array, optional
A two-element list indicating the min and max values of
x for the new logarithmically spaced grid. If None,
these limits will be created from the data themselves.
weighting : str, optional
How should we weight values when averaging
them together into one larger bin?
weighting = 'inversevariance' : weights = 1/unc**2
weighting = {literally anything else} : uniform weights
This will have no impact if unc == None, or for any
new bins that effectively overlap less than one original
unbinned point.
drop_nans : bool, optional
Should we skip any bins that turn out to be nans?
This most often happens when bins are empty.
Returns#
result : dict
A dictionary containing at least...
x = the center of the output grid
y = the resampled value on the output grid
x_edge_lower = the lower edges of the output grid
x_edge_upper = the upper edges of the output grid
...and possibly also
uncertainty = the calculated uncertainty per bin
Source code in chromatic/resampling.py
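To make the behavior concrete, here is a minimal sketch of the idea behind bintoR (a hypothetical bintoR_sketch, not the library code): a grid with constant R = x/dx is uniform in log(x) with spacing 1/R, and each new bin averages the original points that fall inside it, weighted by inverse variance when uncertainties are supplied.

```python
import numpy as np

def bintoR_sketch(x, y, unc=None, R=50):
    """Hypothetical sketch of log-uniform binning at resolution R = x/dx."""
    # a grid uniform in log(x) with spacing 1/R, starting at the first x
    log_centers = np.arange(np.log(x.min()), np.log(x.max()), 1 / R)
    edges = np.exp(np.concatenate([log_centers - 0.5 / R,
                                   [log_centers[-1] + 0.5 / R]]))
    weights = np.ones_like(y) if unc is None else 1 / unc**2
    which_bin = np.digitize(x, edges) - 1
    ny = np.full(len(log_centers), np.nan)
    for i in range(len(log_centers)):
        inside = which_bin == i
        if inside.any():
            ny[i] = np.sum(y[inside] * weights[inside]) / np.sum(weights[inside])
    keep = ~np.isnan(ny)  # the drop_nans=True behavior
    return {"x": np.exp(log_centers)[keep], "y": ny[keep]}
```

For example, binning a constant y = 1 should return 1 in every populated bin, with neighboring grid centers separated by a constant factor of about exp(1/R).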
bintogrid(x=None, y=None, unc=None, newx=None, newx_edges=None, dx=None, nx=None, weighting='inversevariance', drop_nans=True, x_edges=None, visualize=False)
#
Bin any x and y array onto a linearly uniform grid.
Parameters#
x : array
The original independent variable.
(For a spectrum example = wavelength)
y : array
The original dependent variable (same size as x).
(For a spectrum example = flux)
unc : array, None
The uncertainty on the dependent variable
(For a spectrum example = the flux uncertainty)
nx : int, optional
The number of points from the original grid to
bin together into each point of the new one.
dx : float, optional
The fixed spacing for creating a new, linearly uniform
grid that starts at the first value of x. This will
be ignored if newx != None.
newx : array
A new custom grid onto which we should bin.
newx_edges : array
The edges of the new grid of bins for the independent
variable, onto which you want to resample the y
values. The left and right edges of the bins will be,
respectively, newx_edges[:-1] and newx_edges[1:],
so the size of the output array will be
len(newx_edges) - 1.
weighting : str
How should we weight values when averaging
them together into one larger bin?
weighting = 'inversevariance' : weights = 1/unc**2
weighting = {literally anything else} : uniform weights
This will have no impact if unc == None, or for any
new bins that effectively overlap less than one original
unbinned point.
drop_nans : bool
Should we skip any bins that turn out to be nans?
This most often happens when bins are empty.
x_edges : array
The edges of the original independent variable bins.
The left and right edges of the bins are interpreted
to be x_edges[:-1] and x_edges[1:], respectively,
so the associated y should have exactly
1 fewer element than x_edges. This provides finer
control over the size of each bin in the input than
simply supplying x (still a little experimental)
Returns#
result : dict
A dictionary containing at least...
x = the center of the output grid
y = the resampled value on the output grid
x_edge_lower = the lower edges of the output grid
x_edge_upper = the upper edges of the output grid
...and possibly also
uncertainty = the calculated uncertainty per bin
The order of precedence for setting the new grid is
[newx_edges, newx, dx, nx].
The first will be used, and others will be ignored.
Source code in chromatic/resampling.py
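The core averaging step can be sketched as follows (a hypothetical bintogrid_sketch covering only the newx_edges option, not the library function): points are assigned to bins by the new edges, then averaged with inverse-variance weights when uncertainties are given.

```python
import numpy as np

def bintogrid_sketch(x, y, unc=None, newx_edges=None):
    """Hypothetical sketch: average y into the bins defined by newx_edges."""
    weights = np.ones_like(y) if unc is None else 1 / unc**2
    which_bin = np.digitize(x, newx_edges) - 1
    n = len(newx_edges) - 1  # one fewer output element than edges
    ny = np.full(n, np.nan)
    nunc = np.full(n, np.nan)
    for i in range(n):
        inside = which_bin == i
        if inside.any():
            ny[i] = np.sum(y[inside] * weights[inside]) / np.sum(weights[inside])
            if unc is not None:
                # standard error of an inverse-variance-weighted mean
                nunc[i] = 1 / np.sqrt(np.sum(weights[inside]))
    result = {"x": 0.5 * (newx_edges[:-1] + newx_edges[1:]), "y": ny,
              "x_edge_lower": newx_edges[:-1], "x_edge_upper": newx_edges[1:]}
    if unc is not None:
        result["uncertainty"] = nunc
    return result
```

For example, binning y = x for x = 0..9 onto edges [0, 5, 10] gives bin averages of 2.0 and 7.0 at bin centers 2.5 and 7.5.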
calculate_bin_leftright(x)
#
If x is an array of bin centers, calculate the bin edges. (assumes outermost bins are same size as their neighbors)
Parameters#
x : array
The array of bin centers.
Returns#
l : array
The left edges of the bins.
r : array
The right edges of the bins.
Source code in chromatic/resampling.py
calculate_bin_widths(x)
#
If x is an array of bin centers, calculate the bin sizes. (assumes outermost bins are same size as their neighbors)
Parameters#
x : array
The array of bin centers.
Returns#
s : array
The array of bin sizes (total size, from left to right).
Source code in chromatic/resampling.py
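Both helpers follow from placing edges halfway between neighboring centers, with the outermost bins assumed to match their neighbors' widths. A hypothetical sketch (centers_to_leftright and centers_to_widths are illustrative names, not the library functions):

```python
import numpy as np

def centers_to_leftright(x):
    """Hypothetical sketch of calculate_bin_leftright."""
    mid = 0.5 * (x[1:] + x[:-1])  # midpoints between neighboring centers
    left = np.concatenate([[2 * x[0] - mid[0]], mid])
    right = np.concatenate([mid, [2 * x[-1] - mid[-1]]])
    return left, right

def centers_to_widths(x):
    """Hypothetical sketch of calculate_bin_widths."""
    left, right = centers_to_leftright(x)
    return right - left
```

For evenly spaced centers like [1, 2, 3], this yields left edges [0.5, 1.5, 2.5], right edges [1.5, 2.5, 3.5], and widths of 1 everywhere.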
edges_to_leftright(edges)
#
Convert N+1 contiguous edges to two arrays of N left/right edges.
Source code in chromatic/resampling.py
leftright_to_edges(left, right)
#
Convert two arrays of N left/right edges to N+1 contiguous edges.
Source code in chromatic/resampling.py
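These two conversions are simple slices and, for contiguous bins, exact inverses of each other. A hypothetical sketch:

```python
import numpy as np

def edges_to_leftright_sketch(edges):
    # N+1 contiguous edges -> N left edges and N right edges
    return edges[:-1], edges[1:]

def leftright_to_edges_sketch(left, right):
    # assumes the bins are contiguous, i.e. right[i] == left[i + 1]
    return np.concatenate([left, [right[-1]]])
```

Round-tripping edges [0, 1, 2, 3] through both functions returns the same array.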
plot_as_boxes(x, y, xleft=None, xright=None, **kwargs)
#
Plot with boxes, to show the left and right edges of a box. This is useful, for example, to plot flux associated with pixels, in case you are trying to do a sub-pixel resample or interpolation or shift.
Parameters#
x : array
The original independent variable.
y : array
The original dependent variable (same size as x).
**kwargs : dict
All additional keywords will be passed to plt.plot
Source code in chromatic/resampling.py
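The underlying trick can be sketched without any plotting: interleave the left and right edges so each value becomes a flat segment spanning its box (box_outline is a hypothetical helper, not part of the library).

```python
import numpy as np

def box_outline(y, xleft, xright):
    """Hypothetical helper: build plot-ready arrays where each y value
    draws as a flat segment from its left edge to its right edge."""
    bx = np.column_stack([xleft, xright]).ravel()  # interleave edges
    by = np.repeat(y, 2)                           # duplicate each value
    return bx, by

# then, e.g.:  plt.plot(*box_outline(y, xleft, xright))
```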
resample_while_conserving_flux(xin=None, yin=None, xout=None, xin_edges=None, xout_edges=None, replace_nans=0.0, visualize=False, pause=False)
#
Starting from some initial x and y, resample onto a different grid (either higher or lower resolution), while conserving total flux.
When including the entire range of xin,
sum(yout) == sum(yin) should be true.
When including only part of the range of xin,
the integral between any two points should be conserved.
Parameters#
xin : array
The original independent variable.
yin : array
The original dependent variable (same size as xin).
xout : array
The new grid of independent variables onto which
you want to resample the y values. Refers to the
center of each bin (use xout_edges for finer
control over the exact edges of the bins)
xin_edges : array
The edges of the original independent variable bins.
The left and right edges of the bins are interpreted
to be xin_edges[:-1] and xin_edges[1:], respectively,
so the associated yin should have exactly
1 fewer element than xin_edges. This provides finer
control over the size of each bin in the input than
simply supplying xin (still a little experimental)
They should probably be sorted?
xout_edges : array
The edges of the new grid of bins for the independent
variable, onto which you want to resample the y
values. The left and right edges of the bins will be,
respectively, xout_edges[:-1] and xout_edges[1:],
so the size of the output array will be
len(xout_edges) - 1.
replace_nans : float, str
Replace nan values with this value.
replace_nans = 0 : will add no flux where nans are
replace_nans = nan : will ensure you get nans returned
everywhere if you try to resample over any nan
replace_nans = 'interpolate' : will try to replace nans by
linearly interpolating from nearby values (not yet implemented)
visualize : bool
Should we make a plot showing whether it worked?
pause : bool
Should we pause to wait for a key press?
Returns#
result : dict
A dictionary containing...
x = the center of the output grid
y = the resampled value on the output grid
edges = the edges of the output grid, which will
have one more element than x or y
Source code in chromatic/resampling.py
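One standard way to achieve this conservation property (a sketch of the idea, not necessarily the library's exact implementation) is to interpolate the cumulative sum of yin onto the new bin edges and then difference it, so the integral between any two edges is preserved by construction:

```python
import numpy as np

def resample_conserving_flux_sketch(xin_edges, yin, xout_edges):
    """Sketch: conserve flux by interpolating the cumulative sum of yin
    (one value per input bin) onto the output edges, then differencing."""
    cumulative = np.concatenate([[0.0], np.cumsum(yin)])
    return np.diff(np.interp(xout_edges, xin_edges, cumulative))
```

For example, input bins with edges [0, 1, 2, 3, 4] and fluxes [1, 2, 3, 4] resampled onto coarser edges [0, 2, 4] give [3, 7]: the total of 10 is conserved, as the docstring promises.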
expand_filenames(filepath)
#
A wrapper to expand a string or list into a list of filenames.
Source code in chromatic/imports.py
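The behavior described can be sketched with the standard library's glob module (expand_filenames_sketch is a hypothetical reimplementation, not the library code):

```python
from glob import glob

def expand_filenames_sketch(filepath):
    """Hypothetical sketch: a string (possibly containing wildcards) or a
    list of such strings becomes a sorted list of matching filenames."""
    if isinstance(filepath, str):
        filepath = [filepath]
    return sorted(sum([glob(f) for f in filepath], []))
```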
name2color(name)
#
Return the 3-element RGB array of a given color name.
Parameters#
name : str
The name of a color
Returns#
rgb : tuple
3-element RGB color, with numbers from 0.0 to 1.0
Source code in chromatic/imports.py
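The same conversion is available directly from matplotlib's color machinery, which this helper presumably wraps:

```python
from matplotlib.colors import to_rgb

# a named color -> a 3-element RGB tuple with values from 0.0 to 1.0
rgb = to_rgb("red")
```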
one2another(bottom='white', top='red', alpha_bottom=1.0, alpha_top=1.0, N=256)
#
Create a cmap that goes smoothly (linearly in RGBA) from "bottom" to "top".
Parameters#
bottom : str
Name of a color for the bottom of the cmap (0.0)
top : str
Name of a color for the top of the cmap (1.0)
alpha_bottom : float
Opacity at the bottom of the cmap
alpha_top : float
Opacity at the top of the cmap
N : int
The number of levels in the listed color map
Returns#
cmap : Colormap
A color map that goes linearly from the bottom to top color (and alpha).
Source code in chromatic/imports.py
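A colormap that interpolates linearly in RGBA between two named colors can be built with matplotlib's LinearSegmentedColormap; this hypothetical sketch mirrors the documented signature:

```python
from matplotlib.colors import LinearSegmentedColormap, to_rgba

def one2another_sketch(bottom="white", top="red",
                       alpha_bottom=1.0, alpha_top=1.0, N=256):
    """Hypothetical sketch: a cmap linear in RGBA from bottom to top."""
    colors = [to_rgba(bottom, alpha_bottom), to_rgba(top, alpha_top)]
    return LinearSegmentedColormap.from_list("one2another", colors, N=N)
```

With the defaults, evaluating the colormap at 0.0 gives opaque white and at 1.0 gives opaque red.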
remove_unit(x)
#
Quick wrapper to remove the unit from a quantity, without complaining if it doesn't have one.
Source code in chromatic/imports.py
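The described behavior amounts to a one-liner (a hypothetical sketch; the real implementation may handle astropy Quantities differently):

```python
def remove_unit_sketch(x):
    # astropy Quantities carry their numbers in .value;
    # anything without a unit passes through unchanged
    return getattr(x, "value", x)
```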