Reference

A friendly wrapper to load time-series spectra and/or multiwavelength light curves into a chromatic Rainbow object. It will try its best to pick the best reader and return the most useful kind of object. 🦋🌅2️⃣🪜🎬👀🇮🇹📕🧑‍🏫🌈

Parameters

filepath : str, list
    The file or files to open.
**kw : dict
    All other keyword arguments will be passed to the Rainbow initialization.

Returns

rainbow : Rainbow, RainbowWithModel
    The loaded data!

Source code in chromatic/rainbows/__init__.py
def read_rainbow(filepath, **kw):
    """
    A friendly wrapper to load time-series spectra and/or
    multiwavelength light curves into a `chromatic` Rainbow
    object. It will try its best to pick the best reader
    and return the most useful kind of object.
    🦋🌅2️⃣🪜🎬👀🇮🇹📕🧑‍🏫🌈

    Parameters
    ----------
    filepath : str, list
        The file or files to open.
    **kw : dict
        All other keyword arguments will be passed to
        the `Rainbow` initialization.

    Returns
    -------
    rainbow : Rainbow, RainbowWithModel
        The loaded data!
    """
    r = Rainbow(filepath, **kw)
    if "model" in r.fluxlike:
        return RainbowWithModel(**r._get_core_dictionaries())
    else:
        return r
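The model-based dispatch in `read_rainbow` can be sketched with minimal stand-in classes. These placeholders exist only for illustration; the real `Rainbow` and `RainbowWithModel` live in `chromatic` and do far more.

```python
# Minimal stand-ins for illustration; the real classes are in chromatic.
class Rainbow:
    def __init__(self, fluxlike=None, **kw):
        self.fluxlike = fluxlike or {}

    def _get_core_dictionaries(self):
        return {"fluxlike": self.fluxlike}


class RainbowWithModel(Rainbow):
    """A Rainbow that also carries a 'model' fluxlike array."""


def read_rainbow_sketch(r):
    # mirror of read_rainbow: if a 'model' array is present among the
    # fluxlike quantities, upgrade the result to a RainbowWithModel
    if "model" in r.fluxlike:
        return RainbowWithModel(**r._get_core_dictionaries())
    return r


plain = read_rainbow_sketch(Rainbow(fluxlike={"flux": [1.0]}))
with_model = read_rainbow_sketch(Rainbow(fluxlike={"flux": [1.0], "model": [1.0]}))
print(type(plain).__name__, type(with_model).__name__)  # Rainbow RainbowWithModel
```

In real use you would simply call `read_rainbow('some-file.fits')` and let it pick the reader and the return type for you.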

Rainbow (🌈) objects represent brightness as a function of both wavelength and time.

These objects are useful for reading or writing multiwavelength time-series datasets in a variety of formats, visualizing these data with simple commands, and performing basic calculations. RainbowWithModel and SimulatedRainbow objects inherit from Rainbow, so basically all methods and attributes described below are available for them too.

Attributes

wavelike : dict
    A dictionary for quantities with shape (nwave,), for which there's one value for each wavelength.
timelike : dict
    A dictionary for quantities with shape (ntime,), for which there's one value for each time.
fluxlike : dict
    A dictionary for quantities with shape (nwave, ntime), for which there's one value for each wavelength and time.
metadata : dict
    A dictionary containing all other useful information that should stay connected to the Rainbow, in any format.
wavelength : Quantity
    The 1D array of wavelengths for this Rainbow. (This is a property, not an actual attribute.)
time : Quantity
    The 1D array of times for this Rainbow. (This is a property, not an actual attribute.)
flux : array, Quantity
    The 2D array of fluxes for this Rainbow. (This is a property, not an actual attribute.)
uncertainty : array, Quantity
    The 2D array of flux uncertainties for this Rainbow. (This is a property, not an actual attribute.)
ok : array
    The 2D array of "ok-ness" for this Rainbow. (This is a property, not an actual attribute.)
shape : tuple
    The shape of this Rainbow's flux array. (This is a property, not an actual attribute.)
nwave : int
    The number of wavelengths in this Rainbow. (This is a property, not an actual attribute.)
ntime : int
    The number of times in this Rainbow. (This is a property, not an actual attribute.)
nflux : int
    The total number of fluxes in this Rainbow (= nwave*ntime). (This is a property, not an actual attribute.)
dt : Quantity
    The typical time offset between adjacent times in this Rainbow. (This is a property, not an actual attribute.)
name : str
    The name of this Rainbow, if one has been set. (This is a property, not an actual attribute.)
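As a quick numpy sketch of this shape bookkeeping (the array sizes here are hypothetical, and the dictionaries are plain `dict`s rather than a real `Rainbow`):

```python
import numpy as np

# Hypothetical sizes for illustration.
nwave, ntime = 5, 10

# Each core dictionary holds arrays of a characteristic shape:
wavelike = {"wavelength": np.linspace(1, 5, nwave)}   # shape (nwave,)
timelike = {"time": np.linspace(-0.5, 0.5, ntime)}    # shape (ntime,)
fluxlike = {"flux": np.ones((nwave, ntime))}          # shape (nwave, ntime)
metadata = {"name": "example"}                        # everything else

# Properties like shape, nwave, ntime, and nflux all derive from these arrays:
shape = fluxlike["flux"].shape
nflux = shape[0] * shape[1]
print(shape, nflux)  # (5, 10) 50
```

This is why, when initializing a `Rainbow` from arrays, extra keyword arrays can be sorted automatically: their shape alone determines which dictionary they belong in.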

Source code in chromatic/rainbows/rainbow.py
class Rainbow:
    """
    `Rainbow` (🌈) objects represent brightness as a function
    of both wavelength and time.

    These objects are useful for reading or writing multiwavelength
    time-series datasets in a variety of formats, visualizing these
    data with simple commands, and performing basic calculations.
    `RainbowWithModel` and `SimulatedRainbow` objects inherit from
    `Rainbow`, so basically all methods and attributes described
    below are available for them too.

    Attributes
    ----------
    wavelike : dict
        A dictionary for quantities with shape `(nwave,)`,
        for which there's one value for each wavelength.
    timelike : dict
        A dictionary for quantities with shape `(ntime,)`,
        for which there's one value for each time.
    fluxlike : dict
        A dictionary for quantities with shape `(nwave,ntime)`,
        for which there's one value for each wavelength and time.
    metadata : dict
        A dictionary containing all other useful information
        that should stay connected to the `Rainbow`, in any format.
    wavelength : Quantity
        The 1D array of wavelengths for this `Rainbow`.
        (This is a property, not an actual attribute.)
    time : Quantity
        The 1D array of times for this `Rainbow`.
        (This is a property, not an actual attribute.)
    flux : array, Quantity
        The 2D array of fluxes for this `Rainbow`.
        (This is a property, not an actual attribute.)
    uncertainty : array, Quantity
        The 2D array of flux uncertainties for this `Rainbow`.
        (This is a property, not an actual attribute.)
    ok : array
        The 2D array of "ok-ness" for this `Rainbow`.
        (This is a property, not an actual attribute.)
    shape : tuple
        The shape of this `Rainbow`'s flux array.
        (This is a property, not an actual attribute.)
    nwave : int
        The number of wavelengths in this `Rainbow`.
        (This is a property, not an actual attribute.)
    ntime : int
        The number of times in this `Rainbow`.
        (This is a property, not an actual attribute.)
    nflux : int
        The total number of fluxes in this `Rainbow` (= `nwave*ntime`).
        (This is a property, not an actual attribute.)
    dt : Quantity
        The typical time offset between adjacent times in this `Rainbow`.
        (This is a property, not an actual attribute.)
    name : str
        The name of this `Rainbow`, if one has been set.
        (This is a property, not an actual attribute.)
    """

    # all Rainbows must contain these core dictionaries
    _core_dictionaries = ["fluxlike", "timelike", "wavelike", "metadata"]

    # define which axis is which
    waveaxis = 0
    timeaxis = 1

    # which fluxlike keys will respond to math between objects
    _keys_that_respond_to_math = ["flux"]

    # which keys get uncertainty weighting during binning
    _keys_that_get_uncertainty_weighting = ["flux", "uncertainty"]

    def __init__(
        self,
        filepath=None,
        format=None,
        wavelength=None,
        time=None,
        flux=None,
        uncertainty=None,
        wavelike=None,
        timelike=None,
        fluxlike=None,
        metadata=None,
        name=None,
        **kw,
    ):
        """
        Initialize a `Rainbow` object.

        The `__init__` function is called when a new `Rainbow` is
        instantiated as `r = Rainbow(some, kinds, of=inputs)`.

        The options for inputs are flexible, including the possibility
        to initialize from a file, from arrays with appropriate units,
        from dictionaries with appropriate ingredients, or simply as
        an empty object if no arguments are given.

        Parameters
        ----------
        filepath : str, optional
            The filepath pointing to the file or group of files
            that should be read.
        format : str, optional
            The file format of the file to be read. If None,
            the format will be guessed automatically from the
            filepath.
        wavelength : Quantity, optional
            A 1D array of wavelengths, in any unit.
        time : Quantity, Time, optional
            A 1D array of times, in any unit.
        flux : array, optional
            A 2D array of flux values.
        uncertainty : array, optional
            A 2D array of uncertainties, associated with the flux.
        wavelike : dict, optional
            A dictionary containing 1D arrays with the same
            shape as the wavelength axis. It must at least
            contain the key 'wavelength', which should have
            astropy units of wavelength associated with it.
        timelike : dict, optional
            A dictionary containing 1D arrays with the same
            shape as the time axis. It must at least
            contain the key 'time', which should have
            astropy units of time associated with it.
        fluxlike : dict, optional
            A dictionary containing 2D arrays with the shape
            of (nwave, ntime), like flux. It must at least
            contain the key 'flux'.
        metadata : dict, optional
            A dictionary containing all other metadata
            associated with the dataset, generally lots of
            individual parameters or comments.
        **kw : dict, optional
            Additional keywords will be passed along to
            the function that initializes the rainbow.
            If initializing from arrays (`time=`, `wavelength=`,
            ...), these keywords will be interpreted as
            additional arrays that should be sorted by their
            shape into the appropriate dictionary. If
            initializing from files, the keywords will
            be passed on to the reader.

        Examples
        --------
        Initialize from a file. While this works, a more robust
        solution is probably to use `read_rainbow`, which will
        automatically choose the best of `Rainbow` and `RainbowWithModel`
        ```
        r1 = Rainbow('my-neat-file.abc', format='abcdefgh')
        ```

        Initialize from arrays. The wavelength and time must have
        appropriate units, and the shape of the flux array must
        match the size of the wavelength and time arrays. Other
        arrays that match the shape of any of these quantities
        will be stored in the appropriate location. Other inputs
        not matching any of these will be stored as `metadata`.
        ```
        r2 = Rainbow(
                wavelength=np.linspace(1, 5, 50)*u.micron,
                time=np.linspace(-0.5, 0.5, 100)*u.day,
                flux=np.random.normal(0, 1, (50, 100)),
                some_other_array=np.ones((50,100)),
                some_metadata='wow!'
        )
        ```
        Initialize from dictionaries. The dictionaries must contain
        at least `wavelike['wavelength']`, `timelike['time']`, and
        `fluxlike['flux']`, but any other additional inputs can be
        provided.
        ```
        r3 = Rainbow(
                wavelike=dict(wavelength=np.linspace(1, 5, 50)*u.micron),
                timelike=dict(time=np.linspace(-0.5, 0.5, 100)*u.day),
                fluxlike=dict(flux=np.random.normal(0, 1, (50, 100)))
        )
        ```
        """
        # create a history entry for this action (before other variables are defined)
        h = self._create_history_entry("Rainbow", locals())

        # metadata are arbitrary types of information we need
        self.metadata = {"name": name}

        # wavelike quantities are 1D arrays with nwave elements
        self.wavelike = {}

        # timelike quantities are 1D arrays with ntime elements
        self.timelike = {}

        # fluxlike quantities are 2D arrays with nwave x ntime elements
        self.fluxlike = {}

        # try to initialize from the exact dictionaries needed
        if (
            (type(wavelike) == dict)
            and (type(timelike) == dict)
            and (type(fluxlike) == dict)
        ):
            self._initialize_from_dictionaries(
                wavelike=wavelike,
                timelike=timelike,
                fluxlike=fluxlike,
                metadata=metadata,
            )
        # then try to initialize from arrays
        elif (wavelength is not None) and (time is not None) and (flux is not None):
            self._initialize_from_arrays(
                wavelength=wavelength,
                time=time,
                flux=flux,
                uncertainty=uncertainty,
                **kw,
            )
            if metadata is not None:
                self.metadata.update(**metadata)
        # then try to initialize from a file
        elif isinstance(filepath, str) or isinstance(filepath, list):
            self._initialize_from_file(filepath=filepath, format=format, **kw)

        # finally, tidy up by guessing the scales
        self._guess_wscale()
        self._guess_tscale()

        # append the history entry to this Rainbow
        self._setup_history()
        self._record_history_entry(h)

    def _sort(self):
        """
        Sort the wavelengths and times, from lowest to highest.
        Attach the unsorted indices to be able to work backwards.
        This sorts the object in-place (not returning a new Rainbow),
        so there is no return value.
        """

        # figure out the indices to sort from low to high
        i_wavelength = np.argsort(self.wavelength)
        i_time = np.argsort(self.time)

        if np.shape(self.flux) != (len(i_wavelength), len(i_time)):
            message = """
            Wavelength, time, and flux arrays don't match;
            the `._sort()` step is being skipped.
            """
            cheerfully_suggest(message)
            return

        if np.any(np.diff(i_wavelength) < 0):
            message = f"""
            The {self.nwave} input wavelengths were not monotonically increasing.
            {self} has been sorted from lowest to highest wavelength.
            If you want to recover the original wavelength order, the original
            wavelength indices are available in `rainbow.original_wave_index`.
            """
            cheerfully_suggest(message)

        if np.any(np.diff(i_time) < 0):
            message = f"""
            The {self.ntime} input times were not monotonically increasing.
            {self} has been sorted from lowest to highest time.
            If you want to recover the original time order, the original
            time indices are available in `rainbow.original_time_index`.
            """
            cheerfully_suggest(message)

        # attach unsorted indices to this array, if they don't exist
        if "original_wave_index" not in self.wavelike:
            self.wavelike["original_wave_index"] = np.arange(self.nwave)
        if "original_time_index" not in self.timelike:
            self.timelike["original_time_index"] = np.arange(self.ntime)

        # sort that copy by wavelength and time
        for k in self.wavelike:
            if self.wavelike[k] is not None:
                self.wavelike[k] = self.wavelike[k][i_wavelength]
        for k in self.timelike:
            if self.timelike[k] is not None:
                self.timelike[k] = self.timelike[k][i_time]
        for k in self.fluxlike:
            if self.fluxlike[k] is not None:
                wave_sorted = self.fluxlike[k][i_wavelength, :]
                self.fluxlike[k][:, :] = wave_sorted[:, i_time]

    def _validate_uncertainties(self):
        """
        Do some checks on the uncertainty values.
        """
        if self.uncertainty is None and len(self.fluxlike) > 0:
            message = f"""
            Hmmm...it's not clear which column corresponds to the
            flux uncertainties for this Rainbow object. The
            available `fluxlike` columns are:
                {self.fluxlike.keys()}
            A long-term solution might be to fix the `from_?!?!?!?`
            reader, but a short-term solution would be to pick one
            of the columns listed above and say something like

            x.fluxlike['uncertainty'] = x.fluxlike['some-other-relevant-error-column']

            where `x` is the Rainbow you just created.
            """
            cheerfully_suggest(message)
            return

        # kludge to replace zero uncertainties
        # if np.all(self.uncertainty == 0):
        #    cheerfully_suggest("\nUncertainties were all 0, replacing them with 1!")
        #        self.fluxlike["uncertainty"] = np.ones_like(self.flux)

    def _initialize_from_dictionaries(
        self, wavelike={}, timelike={}, fluxlike={}, metadata={}
    ):
        """
        Populate from dictionaries in the correct format.

        Parameters
        ----------
        wavelike : dict
            A dictionary containing 1D arrays with the same
            shape as the wavelength axis. It must at least
            contain the key 'wavelength', which should have
            astropy units of wavelength associated with it.
        timelike : dict
            A dictionary containing 1D arrays with the same
            shape as the time axis. It must at least
            contain the key 'time', which should have
            astropy units of time associated with it.
        fluxlike : dict
            A dictionary containing 2D arrays with the shape
            of (nwave, ntime), like flux. It must at least
            contain the key 'flux'.
        metadata : dict
            A dictionary containing all other metadata
            associated with the dataset, generally lots of
            individual parameters or comments.
        """

        # update the three core dictionaries of arrays
        for k in wavelike:
            self.wavelike[k] = wavelike[k] * 1
        for k in timelike:
            self.timelike[k] = timelike[k] * 1
        for k in fluxlike:
            self.fluxlike[k] = fluxlike[k] * 1
        # multiplying by 1 is a kludge to prevent accidental links

        # update the metadata
        self.metadata.update(**metadata)

        # validate that something reasonable got populated
        self._validate_core_dictionaries()

    def _get_core_dictionaries(self):
        """
        Get the core dictionaries of this Rainbow.

        Returns
        -------
        core : dict
            Dictionary containing the keys
            ['wavelike', 'timelike', 'fluxlike', 'metadata']
        """
        return {k: vars(self)[k] for k in self._core_dictionaries}

    def _initialize_from_arrays(
        self, wavelength=None, time=None, flux=None, uncertainty=None, **kw
    ):
        """
        Populate from arrays.

        Parameters
        ----------
        wavelength : Quantity, optional
            A 1D array of wavelengths, in any unit.
        time : Quantity, Time, optional
            A 1D array of times, in any unit.
        flux : array, optional
            A 2D array of flux values.
        uncertainty : array, optional
            A 2D array of uncertainties, associated with the flux.
        **kw : dict, optional
            Additional keywords will be interpreted as arrays
            that should be sorted into the appropriate location
            based on their size.
        """

        # store the wavelength
        self.wavelike["wavelength"] = wavelength * 1

        # store the time
        self.timelike["time"] = time * 1

        # store the flux and uncertainty
        self.fluxlike["flux"] = flux * 1
        if uncertainty is None:
            self.fluxlike["uncertainty"] = np.ones_like(flux) * np.nan
        else:
            self.fluxlike["uncertainty"] = uncertainty * 1

        # sort other arrays by shape
        for k, v in kw.items():
            self._put_array_in_right_dictionary(k, v)

        # validate that something reasonable got populated
        self._validate_core_dictionaries()

    def _put_array_in_right_dictionary(self, k, v):
        """
        Sort an input into the right core dictionary
        (timelike, wavelike, fluxlike) based on its shape.

        Parameters
        ----------
        k : str
            The key for the (appropriate) dictionary.
        v : array
            The quantity to sort.
        """
        if np.shape(v) == self.shape:
            self.fluxlike[k] = v * 1
        elif np.shape(v) == (self.nwave,):
            self.wavelike[k] = v * 1
        elif np.shape(v) == (self.ntime,):
            self.timelike[k] = v * 1
        else:
            raise ValueError(f"'{k}' doesn't fit anywhere!")

    def _initialize_from_file(self, filepath=None, format=None, **kw):
        """
        Populate from a filename or group of files.

        Parameters
        ----------
        filepath : str, optional
            The filepath pointing to the file or group of files
            that should be read.
        format : str, optional
            The file format of the file to be read. If None,
            the format will be guessed automatically from the
            filepath.
        **kw : dict, optional
            Additional keywords will be passed on to the reader.
        """

        # make sure we're dealing with a real filename
        assert filepath is not None

        # pick the appropriate reader
        reader = guess_reader(filepath=filepath, format=format)
        reader(self, filepath, **kw)

        # validate that something reasonable got populated
        self._validate_core_dictionaries()
        self._validate_uncertainties()
        self._guess_wscale()
        self._guess_tscale()

    def _create_copy(self):
        """
        Create a copy of self, with the core dictionaries copied.
        """
        new = type(self)()
        new._initialize_from_dictionaries(
            **copy.deepcopy(self._get_core_dictionaries())
        )
        return new

    def _guess_wscale(self, relative_tolerance=0.05):
        """
        Try to guess the wscale from the wavelengths.

        Parameters
        ----------

        relative_tolerance : float
            The fractional difference to which the differences
            between wavelengths should match in order for a
            linear or logarithmic wavelength scale to be
            assigned. For example, the default value of 0.05
            means that the differences between all wavelengths
            must be within 5% of each other for the wavelength
            scale to be called linear.
        """
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")

            # give up if there's no wavelength array
            if self.wavelength is None:
                return "?"

            # calculate difference arrays
            w = self.wavelength.value
            dw = np.diff(w)
            dlogw = np.diff(np.log(w))

            # test the three options
            if np.allclose(dw, np.median(dw), rtol=relative_tolerance):
                self.metadata["wscale"] = "linear"
            elif np.allclose(dlogw, np.median(dlogw), rtol=relative_tolerance):
                self.metadata["wscale"] = "log"
            else:
                self.metadata["wscale"] = "?"

    def _guess_tscale(self, relative_tolerance=0.05):
        """
        Try to guess the tscale from the times.

        Parameters
        ----------

        relative_tolerance : float
            The fractional difference to which the differences
            between times should match in order for us to call
            the times effectively uniform, or for us to treat
            them more carefully as an irregular or gappy grid.
        """
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")

            # give up if there's no time array
            if self.time is None:
                return "?"

            # calculate difference arrays
            t = self.time.value
            dt = np.diff(t)
            with warnings.catch_warnings():
                # (don't complain about negative time)
                warnings.simplefilter("ignore")
                dlogt = np.diff(np.log(t))

            # test the options
            if np.allclose(dt, np.median(dt), rtol=relative_tolerance):
                self.metadata["tscale"] = "linear"
            # elif np.allclose(dlogt, np.median(dlogt), rtol=relative_tolerance):
            #    self.metadata["tscale"] = "log"
            else:
                self.metadata["tscale"] = "?"

    @property
    def name(self):
        """
        The name of this `Rainbow` object.
        """
        return self.metadata.get("name", None)

    @property
    def wavelength(self):
        """
        The 1D array of wavelengths (with astropy units of length).
        """
        return self.wavelike.get("wavelength", None)

    @property
    def time(self):
        """
        The 1D array of time (with astropy units of time).
        """
        return self.timelike.get("time", None)

    @property
    def flux(self):
        """
        The 2D array of fluxes (row = wavelength, col = time).
        """
        return self.fluxlike.get("flux", None)

    @property
    def uncertainty(self):
        """
        The 2D array of uncertainties on the fluxes.
        """
        return self.fluxlike.get("uncertainty", None)

    @property
    def ok(self):
        """
        The 2D array of whether data is OK (row = wavelength, col = time).
        """

        # assemble from three possible arrays
        ok = self.fluxlike.get("ok", np.ones(self.shape).astype(bool))
        ok = (
            ok
            * self.wavelike.get("ok", np.ones(self.nwave).astype(bool))[:, np.newaxis]
        )
        ok = (
            ok
            * self.timelike.get("ok", np.ones(self.ntime).astype(bool))[np.newaxis, :]
        )

        # make sure flux is finite
        if self.flux is not None:
            ok = ok * np.isfinite(self.flux)

        # weird kludge to deal with rounding errors (particularly in two-step .bin)
        if ok.dtype == bool:
            return ok
        elif np.all((ok == 1) | (ok == 0)):
            return ok.astype(bool)
        else:
            return np.round(ok, decimals=12)

    @property
    def _time_label(self):
        return self.metadata.get("time_label", "Time")

    @property
    def _wave_label(self):
        return self.metadata.get("wave_label", "Wavelength")

    def __getattr__(self, key):
        """
        If an attribute/method isn't explicitly defined,
        try to pull it from one of the core dictionaries.

        Let's say you want to get the 2D uncertainty array
        but don't want to type `self.fluxlike['uncertainty']`.
        You could instead type `self.uncertainty`, and this
        would try to search through the four standard
        dictionaries to pull out the first `uncertainty`
        it finds.

        Parameters
        ----------
        key : str
            The attribute we're trying to get.
        """
        if key not in self._core_dictionaries:
            for dictionary_name in self._core_dictionaries:
                try:
                    return self.__dict__[dictionary_name][key]
                except KeyError:
                    pass
        message = f"🌈.{key} does not exist for this Rainbow"
        raise AttributeError(message)

    def __setattr__(self, key, value):
        """
        When setting a new attribute, try to sort it into the
        appropriate core dictionary based on its size.

        Let's say you have some quantity that has the same
        shape as the wavelength array and you'd like to attach
        it to this Rainbow object. This will try to save it
        in the most relevant core dictionary (of the choices
        timelike, wavelike, fluxlike).

        Parameters
        ----------
        key : str
            The attribute we're trying to set.
        value : array
            The quantity we're trying to attach to that name.
        """
        try:
            if key in self._core_dictionaries:
                raise ValueError("Trying to set a core dictionary.")
            elif key == "wavelength":
                self.wavelike["wavelength"] = value * 1
                self._validate_core_dictionaries()
            elif key == "time":
                self.timelike["time"] = value * 1
                self._validate_core_dictionaries()
            elif key in ["flux", "uncertainty", "ok"]:
                self.fluxlike[key] = value * 1
                self._validate_core_dictionaries()
            elif isinstance(value, str):
                self.metadata[key] = value
            else:
                self._put_array_in_right_dictionary(key, value)
        except (AttributeError, ValueError):
            self.__dict__[key] = value

    @property
    def _nametag(self):
        """
        This short phrase will preface everything
        said with `self.speak()`.
        """
        return f"🌈({self.nwave}w, {self.ntime}t)"

    @property
    def shape(self):
        """
        The shape of the flux array (nwave, ntime).
        """
        return (self.nwave, self.ntime)

    @property
    def nwave(self):
        """
        The number of wavelengths.
        """
        if self.wavelength is None:
            return 0
        else:
            return len(self.wavelength)

    @property
    def ntime(self):
        """
        The number of times.
        """
        if self.time is None:
            return 0
        else:
            return len(self.time)

    @property
    def dt(self):
        """
        The typical timestep.
        """
        if self.time is None:
            return None
        else:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                return np.nanmedian(np.diff(self.time)).to(u.minute)

    @property
    def nflux(self):
        """
        The total number of fluxes.
        """
        return np.prod(self.shape)

    def _validate_core_dictionaries(self):
        """
        Do some simple checks to make sure this Rainbow
        is populated with the minimal data needed to do anything.
        It shouldn't be run before the Rainbow is fully
        initialized; otherwise, it might complain about
        a half-populated object.
        """

        # make sure there are some times + wavelengths defined
        if self.ntime is None:
            cheerfully_suggest(
                f"""
            No times are defined for this Rainbow.
            """
            )
        if self.nwave is None:
            cheerfully_suggest(
                f"""
            No wavelengths are defined for this Rainbow.
            """
            )

        # warn if the times and wavelengths are the same size
        if (self.nwave == self.ntime) and (self.ntime is not None) and (self.ntime > 1):
            cheerfully_suggest(
                f"""
            The number of wavelengths ({self.nwave}) is the same as the
            number of times ({self.ntime}). This is fine, we suppose
            (<mock exasperated sigh>), but here are a few reasons you might
            want to reconsider letting them have the same size:
                (1) Mathematical operations and variable assignment
                    inside this Rainbow make guesses about whether a quantity
                    is wavelike or timelike based on its shape; these features
                    will fail (or even worse do something mysterious) if
                    there are the same numbers of wavelengths and times.
                (2) For your own darn sake, if your fluxlike arrays are
                    all square, it's going to be very easy for you to accidentally
                    transpose them and not realize it.
                (3) It's very unlikely that your real data had exactly the same
                    number of times and wavelengths, so we're guessing that you
                    probably just created these arrays from scratch, which
                    hopefully means it's not too annoying to just make them
                    have different numbers of wavelengths and times.
            Thanks!
            """
            )

        # does the flux have the right shape?
        if self.shape != np.shape(self.flux):
            message = f"""
            Something doesn't line up!
            The flux array has a shape of {np.shape(self.flux)}.
            The wavelength array has {self.nwave} wavelengths.
            The time array has {self.ntime} times.
            """
            if self.shape == np.shape(self.flux)[::-1]:
                cheerfully_suggest(
                    f"""{message}
                    Any chance your flux array is transposed?
                    """
                )
            else:
                cheerfully_suggest(message)

        for n in ["uncertainty", "ok"]:
            x = getattr(self, n)
            if x is not None:
                if x.shape != np.shape(self.flux):
                    message = f"""
                    Watch out! The '{n}' array has
                    a shape of {x.shape}, which doesn't match the
                    flux array's shape of {np.shape(self.flux)}.
                    """
                    cheerfully_suggest(message)

        # make sure 2D arrays are uniquely named from 1D
        for k in tuple(self.fluxlike.keys()):
            if (k in self.wavelike) or (k in self.timelike):
                self.fluxlike[f"{k}_2d"] = self.fluxlike.pop(k)

        if "ok" in self.fluxlike:
            is_nan = np.isnan(self.fluxlike["flux"])
            self.fluxlike["ok"][is_nan] = 0

        # make sure no arrays are accidentally pointed to each other
        # (if they are, sorting will get really messed up!)
        for d in ["fluxlike", "wavelike", "timelike"]:
            core_dictionary = self.get(d)
            for k1, v1 in core_dictionary.items():
                for k2, v2 in core_dictionary.items():
                    if k1 != k2:
                        assert v1 is not v2

        self._sort()

    def _make_sure_wavelength_edges_are_defined(self):
        """
        Make sure there are some wavelength edges defined.
        """
        if self.nwave <= 1:
            return
        if ("wavelength_lower" not in self.wavelike) or (
            "wavelength_upper" not in self.wavelike
        ):
            if self.metadata.get("wscale", None) == "log":
                l, u = calculate_bin_leftright(np.log(self.wavelength.value))
                self.wavelike["wavelength_lower"] = np.exp(l) * self.wavelength.unit
                self.wavelike["wavelength_upper"] = np.exp(u) * self.wavelength.unit
            elif self.metadata.get("wscale", None) == "linear":
                l, u = calculate_bin_leftright(self.wavelength)
                self.wavelike["wavelength_lower"] = l
                self.wavelike["wavelength_upper"] = u
            else:
                l, u = calculate_bin_leftright(self.wavelength)
                self.wavelike["wavelength_lower"] = l
                self.wavelike["wavelength_upper"] = u

    def _make_sure_time_edges_are_defined(self, redo=True):
        """
        Make sure there are some time edges defined.
        """
        if self.ntime <= 1:
            return
        if (
            ("time_lower" not in self.timelike)
            or ("time_upper" not in self.timelike)
            or redo
        ):
            if self.metadata.get("tscale", None) == "log":
                lower, upper = calculate_bin_leftright(np.log(self.time.value))
                self.timelike["time_lower"] = np.exp(lower) * self.time.unit
                self.timelike["time_upper"] = np.exp(upper) * self.time.unit
            else:
                lower, upper = calculate_bin_leftright(self.time)
                self.timelike["time_lower"] = lower
                self.timelike["time_upper"] = upper

    def __getitem__(self, key):
        """
        Trim a rainbow by indexing, slicing, or masking.
        Two indices must be provided (`[:,:]`).

        Examples
        --------
        ```
        r[:,:]
        r[10:20, :]
        r[np.arange(10,20), :]
        r[r.wavelength > 1*u.micron, :]
        r[:, np.abs(r.time) < 1*u.hour]
        r[r.wavelength > 1*u.micron, np.abs(r.time) < 1*u.hour]
        ```

        Parameters
        ----------
        key : tuple
            The (wavelength, time) slices, indices, or masks.
        """

        i_wavelength, i_time = key
        # create a history entry for this action (before other variables are defined)
        h = self._create_history_entry("__getitem__", locals())

        # create a copy
        new = self._create_copy()

        # make sure we don't drop down to 1D arrays
        if isinstance(i_wavelength, int):
            i_wavelength = [i_wavelength]

        if isinstance(i_time, int):
            i_time = [i_time]

        # do indexing of wavelike
        for w in self.wavelike:
            new.wavelike[w] = self.wavelike[w][i_wavelength]

        # do indexing of timelike
        for t in self.timelike:
            new.timelike[t] = self.timelike[t][i_time]

        # do indexing of fluxlike
        for f in self.fluxlike:
            # (indexing step by step seems more stable)
            if self.fluxlike[f] is None:
                continue
            temporary = self.fluxlike[f][i_wavelength, :]
            new.fluxlike[f] = temporary[:, i_time]

        # finalize the new rainbow
        new._validate_core_dictionaries()
        new._guess_wscale()
        new._guess_tscale()

        # append the history entry to the new Rainbow
        new._record_history_entry(h)

        return new

    def __repr__(self):
        """
        How should this object be represented as a string?
        """
        n = self.__class__.__name__.replace("Rainbow", "🌈")
        if self.name is not None:
            n += f"'{self.name}'"
        return f"<{n}({self.nwave}w, {self.ntime}t)>"

    # import the basic operations for Rainbows
    from .actions.operations import (
        _apply_operation,
        _broadcast_to_fluxlike,
        _raise_ambiguous_shape_error,
        __add__,
        __sub__,
        __mul__,
        __truediv__,
        __eq__,
        diff,
    )

    # import other actions that return other Rainbows
    from .actions import (
        normalize,
        _is_probably_normalized,
        bin,
        bin_in_time,
        bin_in_wavelength,
        trim,
        trim_times,
        trim_wavelengths,
        shift,
        _create_shared_wavelength_axis,
        align_wavelengths,
        inject_transit,
        inject_systematics,
        inject_noise,
        inject_spectrum,
        inject_outliers,
        flag_outliers,
        fold,
        mask_transit,
        compare,
        get_average_lightcurve_as_rainbow,
        get_average_spectrum_as_rainbow,
        _create_fake_wavelike_quantity,
        _create_fake_timelike_quantity,
        _create_fake_fluxlike_quantity,
        remove_trends,
        attach_model,
        inflate_uncertainty,
        concatenate_in_time,
        concatenate_in_wavelength,
    )

    # import summary statistics for each wavelength
    from .get.wavelike import (
        get_average_spectrum,
        get_median_spectrum,
        get_spectral_resolution,
        get_expected_uncertainty,
        get_measured_scatter,
        get_measured_scatter_in_bins,
        get_for_wavelength,
        get_ok_data_for_wavelength,
    )

    # import summary statistics for each time
    from .get.timelike import (
        get_average_lightcurve,
        get_median_lightcurve,
        get_for_time,
        get_ok_data_for_time,
        get_times_as_astropy,
        set_times_from_astropy,
    )

    # import getters for fluxlike quantities
    from .get.fluxlike import (
        get_ok_data,
    )

    # import visualizations that can act on Rainbows
    from .visualizations import (
        imshow,
        pcolormesh,
        scatter,
        plot_lightcurves,
        _setup_animate_lightcurves,
        animate_lightcurves,
        _setup_animate_spectra,
        animate_spectra,
        _setup_animated_scatter,
        setup_wavelength_colors,
        _make_sure_cmap_is_defined,
        get_wavelength_color,
        imshow_quantities,
        plot_quantities,
        imshow_interact,
        plot_spectra,
        plot,
        plot_histogram,
        _scatter_timelike_or_wavelike,
        _get_plot_directory,
        _label_plot_file,
        savefig,
    )

    from .visualizations.wavelike import (
        plot_spectral_resolution,
        plot_noise_comparison,
        plot_noise_comparison_in_bins,
        plot_average_spectrum,
        plot_median_spectrum,
    )

    from .visualizations.timelike import plot_average_lightcurve, plot_median_lightcurve

    from .converters import (
        to_nparray,
        to_df,
    )

    from .helpers import (
        _setup_history,
        _record_history_entry,
        _remove_last_history_entry,
        _create_history_entry,
        history,
        help,
        save,
        get,
    )

dt property #

The typical timestep.

flux property #

The 2D array of fluxes (row = wavelength, col = time).

name property #

The name of this Rainbow object.

nflux property #

The total number of fluxes.

ntime property #

The number of times.

nwave property #

The number of wavelengths.

ok property #

The 2D array of whether data is OK (row = wavelength, col = time).
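The `ok` property combines per-wavelength, per-time, and per-pixel flags, which relies on numpy broadcasting. A minimal standalone sketch with made-up flags (mirroring the logic of the property, not calling chromatic):

```python
import numpy as np

nwave, ntime = 3, 4

# hypothetical "ok" flags at each level of granularity
ok_wave = np.array([True, True, False])          # shape (nwave,)
ok_time = np.array([True, False, True, True])    # shape (ntime,)
ok_flux = np.ones((nwave, ntime), dtype=bool)    # shape (nwave, ntime)
ok_flux[0, 0] = False

# broadcasting stretches the 1D arrays across the 2D grid:
# a wavelength flagged bad marks its whole row, a time its whole column
ok = ok_flux * ok_wave[:, np.newaxis] * ok_time[np.newaxis, :]
print(ok.sum())  # 5 good points remain out of 12
```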

shape property #

The shape of the flux array (nwave, ntime).

time property #

The 1D array of times (with astropy units of time).

uncertainty property #

The 2D array of uncertainties on the fluxes.

wavelength property #

The 1D array of wavelengths (with astropy units of length).

__getattr__(key) #

If an attribute/method isn't explicitly defined, try to pull it from one of the core dictionaries.

Let's say you want to get the 2D uncertainty array but don't want to type self.fluxlike['uncertainty']. You could instead type self.uncertainty, and this would try to search through the four standard dictionaries to pull out the first uncertainty it finds.

Parameters#

key : str
    The attribute we're trying to get.

Source code in chromatic/rainbows/rainbow.py
def __getattr__(self, key):
    """
    If an attribute/method isn't explicitly defined,
    try to pull it from one of the core dictionaries.

    Let's say you want to get the 2D uncertainty array
    but don't want to type `self.fluxlike['uncertainty']`.
    You could instead type `self.uncertainty`, and this
    would try to search through the four standard
    dictionaries to pull out the first `uncertainty`
    it finds.

    Parameters
    ----------
    key : str
        The attribute we're trying to get.
    """
    if key not in self._core_dictionaries:
        for dictionary_name in self._core_dictionaries:
            try:
                return self.__dict__[dictionary_name][key]
            except KeyError:
                pass
    message = f"🌈.{key} does not exist for this Rainbow"
    raise AttributeError(message)
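The same fallback pattern can be sketched standalone; `Holder` below is a hypothetical toy class illustrating the idea, not part of chromatic:

```python
class Holder:
    """A toy class that mimics the dictionary-fallback lookup."""

    _core_dictionaries = ["wavelike", "timelike", "fluxlike", "metadata"]

    def __init__(self):
        self.wavelike, self.timelike, self.metadata = {}, {}, {}
        self.fluxlike = {"uncertainty": [0.01, 0.02]}

    def __getattr__(self, key):
        # only invoked when normal attribute lookup has already failed
        if key not in self._core_dictionaries:
            for name in self._core_dictionaries:
                try:
                    return self.__dict__[name][key]
                except KeyError:
                    pass
        raise AttributeError(f"{key} does not exist")

h = Holder()
print(h.uncertainty)  # [0.01, 0.02], found inside h.fluxlike
```

Because `__getattr__` runs only after normal lookup fails, real attributes and methods are unaffected; the dictionaries act as a last resort.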

__getitem__(key) #

Trim a rainbow by indexing, slicing, or masking. Two indices must be provided ([:,:]).

Examples#

r[:,:]
r[10:20, :]
r[np.arange(10,20), :]
r[r.wavelength > 1*u.micron, :]
r[:, np.abs(r.time) < 1*u.hour]
r[r.wavelength > 1*u.micron, np.abs(r.time) < 1*u.hour]

Parameters#

key : tuple
    The (wavelength, time) slices, indices, or masks.

Source code in chromatic/rainbows/rainbow.py
def __getitem__(self, key):
    """
    Trim a rainbow by indexing, slicing, or masking.
    Two indices must be provided (`[:,:]`).

    Examples
    --------
    ```
    r[:,:]
    r[10:20, :]
    r[np.arange(10,20), :]
    r[r.wavelength > 1*u.micron, :]
    r[:, np.abs(r.time) < 1*u.hour]
    r[r.wavelength > 1*u.micron, np.abs(r.time) < 1*u.hour]
    ```

    Parameters
    ----------
    key : tuple
        The (wavelength, time) slices, indices, or masks.
    """

    i_wavelength, i_time = key
    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("__getitem__", locals())

    # create a copy
    new = self._create_copy()

    # make sure we don't drop down to 1D arrays
    if isinstance(i_wavelength, int):
        i_wavelength = [i_wavelength]

    if isinstance(i_time, int):
        i_time = [i_time]

    # do indexing of wavelike
    for w in self.wavelike:
        new.wavelike[w] = self.wavelike[w][i_wavelength]

    # do indexing of timelike
    for t in self.timelike:
        new.timelike[t] = self.timelike[t][i_time]

    # do indexing of fluxlike
    for f in self.fluxlike:
        # (indexing step by step seems more stable)
        if self.fluxlike[f] is None:
            continue
        temporary = self.fluxlike[f][i_wavelength, :]
        new.fluxlike[f] = temporary[:, i_time]

    # finalize the new rainbow
    new._validate_core_dictionaries()
    new._guess_wscale()
    new._guess_tscale()

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    return new
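The step-by-step indexing in the method above ("indexing step by step seems more stable") avoids a numpy pitfall: passing two index arrays at once triggers elementwise "fancy" pairing rather than a rectangular selection. A standalone sketch with made-up arrays:

```python
import numpy as np

flux = np.arange(12).reshape(3, 4)  # pretend (nwave, ntime) = (3, 4)

i_wavelength = np.array([0, 2])                # integer indices on axis 0
i_time = np.array([True, False, True, True])   # boolean mask on axis 1

# flux[i_wavelength, i_time] would try to pair the indices elementwise
# (and fail here, since the shapes don't broadcast); indexing one axis
# at a time keeps the selection rectangular
subset = flux[i_wavelength, :][:, i_time]
print(subset)  # rows 0 and 2, columns 0, 2, and 3
```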

__init__(filepath=None, format=None, wavelength=None, time=None, flux=None, uncertainty=None, wavelike=None, timelike=None, fluxlike=None, metadata=None, name=None, **kw) #

Initialize a Rainbow object.

The __init__ function is called when a new Rainbow is instantiated as r = Rainbow(some, kinds, of=inputs).

The options for inputs are flexible, including the possibility to initialize from a file, from arrays with appropriate units, from dictionaries with appropriate ingredients, or simply as an empty object if no arguments are given.

Parameters#

filepath : str, optional
    The filepath pointing to the file or group of files that should be read.
format : str, optional
    The file format of the file to be read. If None, the format will be guessed automatically from the filepath.
wavelength : Quantity, optional
    A 1D array of wavelengths, in any unit.
time : Quantity, Time, optional
    A 1D array of times, in any unit.
flux : array, optional
    A 2D array of flux values.
uncertainty : array, optional
    A 2D array of uncertainties, associated with the flux.
wavelike : dict, optional
    A dictionary containing 1D arrays with the same shape as the wavelength axis. It must at least contain the key 'wavelength', which should have astropy units of wavelength associated with it.
timelike : dict, optional
    A dictionary containing 1D arrays with the same shape as the time axis. It must at least contain the key 'time', which should have astropy units of time associated with it.
fluxlike : dict, optional
    A dictionary containing 2D arrays with the shape of (nwave, ntime), like flux. It must at least contain the key 'flux'.
metadata : dict, optional
    A dictionary containing all other metadata associated with the dataset, generally lots of individual parameters or comments.
**kw : dict, optional
    Additional keywords will be passed along to the function that initializes the rainbow. If initializing from arrays (time=, wavelength=, ...), these keywords will be interpreted as additional arrays that should be sorted by their shape into the appropriate dictionary. If initializing from files, the keywords will be passed on to the reader.

Examples#

Initialize from a file. While this works, a more robust solution is probably to use read_rainbow, which will automatically choose the best of Rainbow and RainbowWithModel.

r1 = Rainbow('my-neat-file.abc', format='abcdefgh')

Initialize from arrays. The wavelength and time must have appropriate units, and the shape of the flux array must match the size of the wavelength and time arrays. Other arrays that match the shape of any of these quantities will be stored in the appropriate location. Other inputs not matching any of these will be stored as metadata.

r2 = Rainbow(
        wavelength=np.linspace(1, 5, 50)*u.micron,
        time=np.linspace(-0.5, 0.5, 100)*u.day,
        flux=np.random.normal(0, 1, (50, 100)),
        some_other_array=np.ones((50,100)),
        some_metadata='wow!'
)

Initialize from dictionaries. The dictionaries must contain at least wavelike['wavelength'], timelike['time'], and fluxlike['flux'], but any other additional inputs can be provided.

r3 = Rainbow(
        wavelike=dict(wavelength=np.linspace(1, 5, 50)*u.micron),
        timelike=dict(time=np.linspace(-0.5, 0.5, 100)*u.day),
        fluxlike=dict(flux=np.random.normal(0, 1, (50, 100)))
)
Source code in chromatic/rainbows/rainbow.py
def __init__(
    self,
    filepath=None,
    format=None,
    wavelength=None,
    time=None,
    flux=None,
    uncertainty=None,
    wavelike=None,
    timelike=None,
    fluxlike=None,
    metadata=None,
    name=None,
    **kw,
):
    """
    Initialize a `Rainbow` object.

    The `__init__` function is called when a new `Rainbow` is
    instantiated as `r = Rainbow(some, kinds, of=inputs)`.

    The options for inputs are flexible, including the possibility
    to initialize from a file, from arrays with appropriate units,
    from dictionaries with appropriate ingredients, or simply as
    an empty object if no arguments are given.

    Parameters
    ----------
    filepath : str, optional
        The filepath pointing to the file or group of files
        that should be read.
    format : str, optional
        The file format of the file to be read. If None,
        the format will be guessed automatically from the
        filepath.
    wavelength : Quantity, optional
        A 1D array of wavelengths, in any unit.
    time : Quantity, Time, optional
        A 1D array of times, in any unit.
    flux : array, optional
        A 2D array of flux values.
    uncertainty : array, optional
        A 2D array of uncertainties, associated with the flux.
    wavelike : dict, optional
        A dictionary containing 1D arrays with the same
        shape as the wavelength axis. It must at least
        contain the key 'wavelength', which should have
        astropy units of wavelength associated with it.
    timelike : dict, optional
        A dictionary containing 1D arrays with the same
        shape as the time axis. It must at least
        contain the key 'time', which should have
        astropy units of time associated with it.
    fluxlike : dict, optional
        A dictionary containing 2D arrays with the shape
        of (nwave, ntime), like flux. It must at least
        contain the key 'flux'.
    metadata : dict, optional
        A dictionary containing all other metadata
        associated with the dataset, generally lots of
        individual parameters or comments.
    **kw : dict, optional
        Additional keywords will be passed along to
        the function that initializes the rainbow.
        If initializing from arrays (`time=`, `wavelength=`,
        ...), these keywords will be interpreted as
        additional arrays that should be sorted by their
        shape into the appropriate dictionary. If
        initializing from files, the keywords will
        be passed on to the reader.

    Examples
    --------
    Initialize from a file. While this works, a more robust
    solution is probably to use `read_rainbow`, which will
    automatically choose the best of `Rainbow` and `RainbowWithModel`.
    ```
    r1 = Rainbow('my-neat-file.abc', format='abcdefgh')
    ```

    Initialize from arrays. The wavelength and time must have
    appropriate units, and the shape of the flux array must
    match the size of the wavelength and time arrays. Other
    arrays that match the shape of any of these quantities
    will be stored in the appropriate location. Other inputs
    not matching any of these will be stored as `metadata`.
    ```
    r2 = Rainbow(
            wavelength=np.linspace(1, 5, 50)*u.micron,
            time=np.linspace(-0.5, 0.5, 100)*u.day,
            flux=np.random.normal(0, 1, (50, 100)),
            some_other_array=np.ones((50,100)),
            some_metadata='wow!'
    )
    ```
    Initialize from dictionaries. The dictionaries must contain
    at least `wavelike['wavelength']`, `timelike['time']`, and
    `fluxlike['flux']`, but any other additional inputs can be
    provided.
    ```
    r3 = Rainbow(
            wavelike=dict(wavelength=np.linspace(1, 5, 50)*u.micron),
            timelike=dict(time=np.linspace(-0.5, 0.5, 100)*u.day),
            fluxlike=dict(flux=np.random.normal(0, 1, (50, 100)))
    )
    ```
    """
    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("Rainbow", locals())

    # metadata are arbitrary types of information we need
    self.metadata = {"name": name}

    # wavelike quanities are 1D arrays with nwave elements
    self.wavelike = {}

    # timelike quantities are 1D arrays with ntime elements
    self.timelike = {}

    # fluxlike quantities are 2D arrays with nwave x time elements
    self.fluxlike = {}

    # try to intialize from the exact dictionaries needed
    if (
        (type(wavelike) == dict)
        and (type(timelike) == dict)
        and (type(fluxlike) == dict)
    ):
        self._initialize_from_dictionaries(
            wavelike=wavelike,
            timelike=timelike,
            fluxlike=fluxlike,
            metadata=metadata,
        )
    # then try to initialize from arrays
    elif (wavelength is not None) and (time is not None) and (flux is not None):
        self._initialize_from_arrays(
            wavelength=wavelength,
            time=time,
            flux=flux,
            uncertainty=uncertainty,
            **kw,
        )
        if metadata is not None:
            self.metadata.update(**metadata)
    # then try to initialize from a file
    elif isinstance(filepath, str) or isinstance(filepath, list):
        self._initialize_from_file(filepath=filepath, format=format, **kw)

    # finally, tidy up by guessing the scales
    self._guess_wscale()
    self._guess_tscale()

    # append the history entry to this Rainbow
    self._setup_history()
    self._record_history_entry(h)

__repr__() #

How should this object be represented as a string?

Source code in chromatic/rainbows/rainbow.py
def __repr__(self):
    """
    How should this object be represented as a string?
    """
    n = self.__class__.__name__.replace("Rainbow", "🌈")
    if self.name is not None:
        n += f"'{self.name}'"
    return f"<{n}({self.nwave}w, {self.ntime}t)>"

__setattr__(key, value) #

When setting a new attribute, try to sort it into the appropriate core dictionary based on its size.

Let's say you have some quantity that has the same shape as the wavelength array and you'd like to attach it to this Rainbow object. This will try to save it in the most relevant core dictionary (of the choices timelike, wavelike, fluxlike).

Parameters#

key : str
    The attribute we're trying to set.
value : array
    The quantity we're trying to attach to that name.

Source code in chromatic/rainbows/rainbow.py
def __setattr__(self, key, value):
    """
    When setting a new attribute, try to sort it into the
    appropriate core dictionary based on its size.

    Let's say you have some quantity that has the same
    shape as the wavelength array and you'd like to attach
    it to this Rainbow object. This will try to save it
    in the most relevant core dictionary (of the choices
    timelike, wavelike, fluxlike).

    Parameters
    ----------
    key : str
        The attribute we're trying to set.
    value : array
        The quantity we're trying to attach to that name.
    """
    try:
        if key in self._core_dictionaries:
            raise ValueError("Trying to set a core dictionary.")
        elif key == "wavelength":
            self.wavelike["wavelength"] = value * 1
            self._validate_core_dictionaries()
        elif key == "time":
            self.timelike["time"] = value * 1
            self._validate_core_dictionaries()
        elif key in ["flux", "uncertainty", "ok"]:
            self.fluxlike[key] = value * 1
            self._validate_core_dictionaries()
        elif isinstance(value, str):
            self.metadata[key] = value
        else:
            self._put_array_in_right_dictionary(key, value)
    except (AttributeError, ValueError):
        self.__dict__[key] = value
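The shape-based sorting described above can be sketched standalone; `sort_by_shape` is a hypothetical helper that mirrors the idea behind `_put_array_in_right_dictionary`, not chromatic's actual implementation:

```python
import numpy as np

# route an array into the dictionary whose shape it matches
# (hypothetical helper, for illustration only)
def sort_by_shape(key, value, nwave, ntime, wavelike, timelike, fluxlike):
    shape = np.shape(value)
    if shape == (nwave, ntime):
        fluxlike[key] = value
    elif shape == (nwave,):
        wavelike[key] = value
    elif shape == (ntime,):
        timelike[key] = value
    else:
        raise ValueError(f"{key} with shape {shape} fits nowhere")

wavelike, timelike, fluxlike = {}, {}, {}
sort_by_shape("background", np.ones((3, 5)), 3, 5, wavelike, timelike, fluxlike)
sort_by_shape("airmass", np.ones(5), 3, 5, wavelike, timelike, fluxlike)
print("background" in fluxlike, "airmass" in timelike)  # True True
```

This also shows why `_validate_core_dictionaries` warns when `nwave == ntime`: with square arrays, the `(nwave,)` and `(ntime,)` cases become indistinguishable.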

Bases: Rainbow

RainbowWithModel objects have a fluxlike model attached to them, meaning that they can compare data to a model (for example, by computing residuals and chi-squared, or by making model-aware plots).

This class definition inherits from Rainbow.

Source code in chromatic/rainbows/withmodel.py
class RainbowWithModel(Rainbow):
    """
    `RainbowWithModel` objects have a fluxlike `model`
    attached to them, meaning that they can compare data
    to a model (residuals, chi-squared, model-aware plots).

    This class definition inherits from `Rainbow`.
    """

    # which fluxlike keys will respond to math between objects
    _keys_that_respond_to_math = ["flux", "model"]

    # which keys get uncertainty weighting during binning
    _keys_that_get_uncertainty_weighting = ["flux", "model", "uncertainty"]

    @property
    def residuals(self):
        """
        Calculate the residuals on the fly,
        to make sure they're always up to date.

        The residuals are calculated simply
        as the `.flux` - `.model`, so they are
        in whatever units those arrays have.

        Returns
        -------
        residuals : array, Quantity
            The 2D array of residuals (nwave, ntime).
        """
        return self.flux - self.model

    @property
    def chi_squared(self):
        """
        Calculate $\chi^2$.

        This calculates the sum of the squares of
        the uncertainty-normalized residuals,
        sum(((flux - model)/uncertainty)**2)

        Data points marked as not OK are ignored.

        Returns
        -------
        chi_squared : float
            The chi-squared value.
        """
        r = (self.flux - self.model) / self.uncertainty
        return np.sum(r[self.ok] ** 2)

    @property
    def residuals_plus_one(self):
        """
        A tiny wrapper to get the residuals + 1.

        Returns
        -------
        residuals_plus_one : array, Quantity
            The 2D array of residuals + 1 (nwave, ntime).
        """
        return self.flux - self.model + 1

    @property
    def ones(self):
        """
        Generate an array of ones that looks like the flux.
        (A tiny wrapper needed for `plot_with_model`)

        Returns
        -------
        ones : array, Quantity
            The 2D array of ones (nwave, ntime).
        """
        return np.ones_like(self.flux)

    def _validate_core_dictionaries(self):
        super()._validate_core_dictionaries()
        try:
            model = self.get("model")
            assert np.shape(model) == np.shape(self.flux)
        except (AttributeError, AssertionError):
            message = """
            No fluxlike 'model' was found attached to this
            `RainbowWithModel` object. The poor thing,
            its name is a lie! Please connect a model.
            The simplest way to do so might look like...
            `rainbow.model = np.ones(rainbow.shape)`
            ...or similarly with a more interesting array.
            """
            cheerfully_suggest(message)

    from .visualizations import (
        plot_with_model,
        plot_with_model_and_residuals,
        imshow_with_models,
        plot_one_wavelength_with_models,
        animate_with_models,
    )

chi_squared property #

Calculate $\chi^2$.

This calculates the sum of the squares of the uncertainty-normalized residuals, sum(((flux - model)/uncertainty)**2)

Data points marked as not OK are ignored.

Returns#

chi_squared : float The chi-squared value.

ones property #

Generate an array of ones that looks like the flux. (A tiny wrapper needed for plot_with_model)

Returns#

ones : array, Quantity The 2D array of ones (nwave, ntime).

residuals property #

Calculate the residuals on the fly, to make sure they're always up to date.

The residuals are calculated simply as the .flux - .model, so they are in whatever units those arrays have.

Returns#

residuals : array, Quantity The 2D array of residuals (nwave, ntime).

residuals_plus_one property #

A tiny wrapper to get the residuals + 1.

Returns#

residuals_plus_one : array, Quantity The 2D array of residuals + 1 (nwave, ntime).
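The arithmetic behind these properties can be sketched independently of `chromatic`. Below is a minimal pure-Python stand-in (a hypothetical `chi_squared` function operating on plain lists rather than real (nwave, ntime) arrays) showing how residuals are normalized by uncertainty and how not-OK points are excluded from the sum:

```python
import math

def chi_squared(flux, model, uncertainty, ok):
    """Sum of squared, uncertainty-normalized residuals over OK points."""
    total = 0.0
    for f, m, s, good in zip(flux, model, uncertainty, ok):
        if good:
            total += ((f - m) / s) ** 2
    return total

flux = [1.01, 0.99, 1.02]
model = [1.0, 1.0, 1.0]
sigma = [0.01, 0.01, 0.01]
ok = [True, True, False]  # the last point is ignored
result = chi_squared(flux, model, sigma, ok)  # ≈ 1 + 1 = 2
```

The real property behaves the same way, except that the residuals and mask are full 2D fluxlike arrays.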

Bases: RainbowWithModel

SimulatedRainbow objects are created from scratch within chromatic, with options for various different wavelength grids, time grids, noise sources, and injected models. They can be useful for generating quick simulated datasets for testing analysis and visualization tools.

This class definition inherits from RainbowWithModel, which itself inherits from Rainbow.

Source code in chromatic/rainbows/simulated.py
class SimulatedRainbow(RainbowWithModel):
    """
    `SimulatedRainbow` objects are created from scratch
    within `chromatic`, with options for various different
    wavelength grids, time grids, noise sources, and injected
    models. They can be useful for generating quick simulated
    datasets for testing analysis and visualization tools.

    This class definition inherits from `RainbowWithModel`,
    which itself inherits from `Rainbow`.
    """

    def __init__(
        self,
        tlim=[-2.5, 2.5] * u.hour,
        dt=2 * u.minute,
        time=None,
        wlim=[0.5, 5] * u.micron,
        R=100,
        dw=None,
        wavelength=None,
        star_flux=None,
        name=None,
        signal_to_noise=None,
    ):
        """
        Initialize a `SimulatedRainbow` object from some parameters.

        This sets up an effectively empty `Rainbow` with defined
        wavelengths and times. For making more interesting
        simulated datasets, this will often be paired with
        some combination of the `.inject...` actions that inject
        various astrophysical, instrumental, or noise signatures
        into the dataset.

        The time-setting order of precedence is:
            1) time
            2) tlim + dt

        The wavelength-setting order of precedence is:
            1) wavelength
            2) wlim + dw
            3) wlim + R

        Parameters
        ----------
        tlim : list or Quantity
            The [min, max] times for creating the time grid.
            These should have astropy units of time.
        dt : Quantity
            The d(time) bin size for creating a grid
            that is uniform in linear space.
        time : Quantity
            An array of times, if you just want to give
            it an entirely custom array.
        wlim : list or Quantity
            The [min, max] wavelengths for creating the grid.
            These should have astropy units of wavelength.
        R : float
            The spectral resolution for creating a grid
            that is uniform in logarithmic space.
        dw : Quantity
            The d(wavelength) bin size for creating a grid
            that is uniform in linear space.
        wavelength : Quantity
            An array of wavelengths, if you just want to give
            it an entirely custom array.
        star_flux : numpy 1D array
            An array of fluxes corresponding to the supplied wavelengths.
            If left blank, the code assumes a normalized flux of
            flux(wavelength) = 1 for all wavelengths.
        """
        Rainbow.__init__(self)

        # (remove the history entry from creating the Rainbow)
        self._remove_last_history_entry()

        # create a history entry for this action (before other variables are defined)
        h = self._create_history_entry("SimulatedRainbow", locals())

        # set up the wavelength grid
        self._setup_fake_wavelength_grid(wlim=wlim, R=R, dw=dw, wavelength=wavelength)

        # set up the time grid
        self._setup_fake_time_grid(tlim=tlim, dt=dt, time=time)

        # save the basic inputs that aren't stored elsewhere
        self.metadata["name"] = name

        # If the flux of the star is not given,
        # assume a continuum-normalized flux where flux = 1 at all wavelengths.
        if star_flux is None:
            model = np.ones(self.shape)

        # If the flux vs wavelength of the star is supplied,
        # include it in the model.
        else:
            # Check to make sure the flux and wavelengths
            # have the same shape.
            if len(star_flux) == len(self.wavelike["wavelength"]):
                model = np.transpose([star_flux] * self.shape[1])
            elif len(star_flux) == 1:
                model = star_flux * np.ones(self.shape)

        # Set uncertainty.
        self.fluxlike["flux"] = model * 1
        self.fluxlike["model"] = model * 1
        self.fluxlike["uncertainty"] = np.zeros(self.shape)

        # make sure everything is defined and sorted
        self._validate_core_dictionaries()

        if signal_to_noise is not None:
            message = f"""
            You tried to specify the noise level with
            `SimulatedRainbow(signal_to_noise={signal_to_noise})`,
            but that functionality is going away soon.
            Please replace it right now with
            `SimulatedRainbow().inject_noise(signal_to_noise={signal_to_noise})`
            so that your code will continue to work.
            You're getting away with it this time,
            but it won't work for much longer!
            """
            cheerfully_suggest(message)
            new = self.inject_noise()
            for k in ["flux", "uncertainty", "model"]:
                self.fluxlike[k] = new.fluxlike[k]

        # append the history entry to the new Rainbow
        self._record_history_entry(h)

    def _setup_fake_time_grid(
        self, tlim=[-2.5 * u.hour, 2.5 * u.hour], dt=1 * u.minute, time=None
    ):
        """
        Create a fake time grid.

        Parameters
        ----------

        tlim : list or Quantity
            The [min, max] times for creating the time grid.
            These should have astropy units of time.
        dt : Quantity
            The d(time) bin size for creating a grid
            that is uniform in linear space.
        time : Quantity
            An array of times, if you just want to give
            it an entirely custom array.

        The time-setting order of precedence is:
            1) time
            2) tlim + dt
        """
        # check we're trying to do exactly one thing
        if (tlim is None) and (time is None):
            raise RuntimeError("Please specify either `tlim` or `time`.")

        if time is None:
            # confirm tlim has time units (this raises if it doesn't)
            t_unit = tlim[0].unit
            t_unit.to("s")
            time = np.arange(tlim[0] / t_unit, tlim[1] / t_unit, dt / t_unit) * t_unit
        else:
            t_unit = time.unit

        self.timelike["time"] = u.Quantity(time).to(u.day)
        # TODO, make this match up better with astropy time

        self._guess_tscale()

    def _setup_fake_wavelength_grid(
        self, wlim=[0.5 * u.micron, 5 * u.micron], R=100, dw=None, wavelength=None
    ):
        """
        Create a fake wavelength grid.

        Parameters
        ----------

        wlim : list or Quantity
            The [min, max] wavelengths for creating the grid.
            These should have astropy units of wavelength.
        R : float
            The spectral resolution for creating a grid
            that is uniform in logarithmic space.
        dw : Quantity
            The d(wavelength) bin size for creating a grid
            that is uniform in linear space.
        wavelength : Quantity
            An array of wavelengths, if you just want to give
            it an entirely custom array.

        The wavelength-setting order of precedence is:
            1) wavelength
            2) wlim + dw
            3) wlim + R
        """

        # check we're trying to do exactly one thing
        if (wlim is None) and (wavelength is None):
            raise RuntimeError("Please specify either `wlim` or `wavelength`.")

        # create a linear or logarithmic grid
        if wavelength is None:
            # check that we have a way to set the grid spacing
            if (R is None) and (dw is None):
                raise RuntimeError("Please specify either `R` or `dw`.")

            w_unit = wlim[0].unit
            if dw is None:
                self.metadata["R"] = R
                # self.metadata["wscale"] = "log"

                logw_min = np.log(wlim[0] / w_unit)
                logw_max = np.log(wlim[1] / w_unit)
                logw = np.arange(logw_min, logw_max, 1 / R)
                wavelength = np.exp(logw) * w_unit

            elif dw is not None:
                self.metadata["dw"] = dw
                # self.metadata["wscale"] = "linear"
                wavelength = (
                    np.arange(wlim[0] / w_unit, wlim[1] / w_unit, self.dw / w_unit)
                    * w_unit
                )

        # or just make sure the wavelength grid has units
        elif wavelength is not None:
            w_unit = wavelength.unit

        # make sure the wavelength array has units
        self.wavelike["wavelength"] = u.Quantity(wavelength).to(u.micron)
        self._guess_wscale()

__init__(tlim=[-2.5, 2.5] * u.hour, dt=2 * u.minute, time=None, wlim=[0.5, 5] * u.micron, R=100, dw=None, wavelength=None, star_flux=None, name=None, signal_to_noise=None) #

Initialize a SimulatedRainbow object from some parameters.

This sets up an effectively empty Rainbow with defined wavelengths and times. For making more interesting simulated datasets, this will often be paired with some combination of the .inject... actions that inject various astrophysical, instrumental, or noise signatures into the dataset.

The time-setting order of precedence is

1) time 2) tlim + dt

The wavelength-setting order of precedence is

1) wavelength 2) wlim + dw 3) wlim + R

Parameters#

tlim : list or Quantity
    The [min, max] times for creating the time grid. These should have astropy units of time.
dt : Quantity
    The d(time) bin size for creating a grid that is uniform in linear space.
time : Quantity
    An array of times, if you just want to give it an entirely custom array.
wlim : list or Quantity
    The [min, max] wavelengths for creating the grid. These should have astropy units of wavelength.
R : float
    The spectral resolution for creating a grid that is uniform in logarithmic space.
dw : Quantity
    The d(wavelength) bin size for creating a grid that is uniform in linear space.
wavelength : Quantity
    An array of wavelengths, if you just want to give it an entirely custom array.
star_flux : numpy 1D array
    An array of fluxes corresponding to the supplied wavelengths. If left blank, the code assumes a normalized flux of flux(wavelength) = 1 for all wavelengths.
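To illustrate the `wlim` + `R` option, here is a pure-Python sketch (no astropy units; `log_wavelength_grid` is a hypothetical helper, not part of chromatic) of how a log-uniform grid at resolution R is built: grid points are spaced by d(ln w) = 1/R between the wavelength limits.

```python
import math

def log_wavelength_grid(w_min, w_max, R):
    """Wavelengths spaced uniformly in log, with d(ln w) = 1/R."""
    logw_min, logw_max = math.log(w_min), math.log(w_max)
    n = math.ceil((logw_max - logw_min) * R)  # mimics np.arange's endpoint behavior
    return [math.exp(logw_min + i / R) for i in range(n)]

grid = log_wavelength_grid(0.5, 5.0, R=100)
# neighboring points satisfy w[i+1] / w[i] = exp(1/R) ≈ 1 + 1/R
ratio = grid[1] / grid[0]
```

With the default `wlim=[0.5, 5] * u.micron` and `R=100`, this produces 231 wavelength points spanning one decade.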

Source code in chromatic/rainbows/simulated.py
def __init__(
    self,
    tlim=[-2.5, 2.5] * u.hour,
    dt=2 * u.minute,
    time=None,
    wlim=[0.5, 5] * u.micron,
    R=100,
    dw=None,
    wavelength=None,
    star_flux=None,
    name=None,
    signal_to_noise=None,
):
    """
    Initialize a `SimulatedRainbow` object from some parameters.

    This sets up an effectively empty `Rainbow` with defined
    wavelengths and times. For making more interesting
    simulated datasets, this will often be paired with
    some combination of the `.inject...` actions that inject
    various astrophysical, instrumental, or noise signatures
    into the dataset.

    The time-setting order of precedence is:
        1) time
        2) tlim + dt

    The wavelength-setting order of precedence is:
        1) wavelength
        2) wlim + dw
        3) wlim + R

    Parameters
    ----------
    tlim : list or Quantity
        The [min, max] times for creating the time grid.
        These should have astropy units of time.
    dt : Quantity
        The d(time) bin size for creating a grid
        that is uniform in linear space.
    time : Quantity
        An array of times, if you just want to give
        it an entirely custom array.
    wlim : list or Quantity
        The [min, max] wavelengths for creating the grid.
        These should have astropy units of wavelength.
    R : float
        The spectral resolution for creating a grid
        that is uniform in logarithmic space.
    dw : Quantity
        The d(wavelength) bin size for creating a grid
        that is uniform in linear space.
    wavelength : Quantity
        An array of wavelengths, if you just want to give
        it an entirely custom array.
    star_flux : numpy 1D array
        An array of fluxes corresponding to the supplied wavelengths.
        If left blank, the code assumes a normalized flux of
        flux(wavelength) = 1 for all wavelengths.
    """
    Rainbow.__init__(self)

    # (remove the history entry from creating the Rainbow)
    self._remove_last_history_entry()

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("SimulatedRainbow", locals())

    # set up the wavelength grid
    self._setup_fake_wavelength_grid(wlim=wlim, R=R, dw=dw, wavelength=wavelength)

    # set up the time grid
    self._setup_fake_time_grid(tlim=tlim, dt=dt, time=time)

    # save the basic inputs that aren't stored elsewhere
    self.metadata["name"] = name

    # If the flux of the star is not given,
    # assume a continuum-normalized flux where flux = 1 at all wavelengths.
    if star_flux is None:
        model = np.ones(self.shape)

    # If the flux vs wavelength of the star is supplied,
    # include it in the model.
    else:
        # Check to make sure the flux and wavelengths
        # have the same shape.
        if len(star_flux) == len(self.wavelike["wavelength"]):
            model = np.transpose([star_flux] * self.shape[1])
        elif len(star_flux) == 1:
            model = star_flux * np.ones(self.shape)

    # Set uncertainty.
    self.fluxlike["flux"] = model * 1
    self.fluxlike["model"] = model * 1
    self.fluxlike["uncertainty"] = np.zeros(self.shape)

    # make sure everything is defined and sorted
    self._validate_core_dictionaries()

    if signal_to_noise is not None:
        message = f"""
        You tried to specify the noise level with
        `SimulatedRainbow(signal_to_noise={signal_to_noise})`,
        but that functionality is going away soon.
        Please replace it right now with
        `SimulatedRainbow().inject_noise(signal_to_noise={signal_to_noise})`
        so that your code will continue to work.
        You're getting away with it this time,
        but it won't work for much longer!
        """
        cheerfully_suggest(message)
        new = self.inject_noise()
        for k in ["flux", "uncertainty", "model"]:
            self.fluxlike[k] = new.fluxlike[k]

    # append the history entry to the new Rainbow
    self._record_history_entry(h)

🌈 Helpers#

Retrieve an attribute by its string name. (This is a friendlier wrapper for getattr()).

r.get('flux') is identical to r.flux

This is different from indexing directly into a core dictionary (for example, r.fluxlike['flux']), because it can also be used to get the results of properties that do calculations on the fly (for example, r.residuals in the RainbowWithModel class).

Parameters#

key : str The name of the attribute, property, or core dictionary item to get. default : any, optional What to return if the attribute can't be found.

Returns#

thing : any The thing you were trying to get. If unavailable, return the default (which by default is None)
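A minimal stand-in illustrating this behavior (the `Thing` class below is hypothetical, not a chromatic object):

```python
class Thing:
    """A toy object with one attribute and a getattr-based `get` helper."""
    flux = 42

    def get(self, key, default=None):
        # return the attribute if it exists, otherwise fall back to default
        try:
            return getattr(self, key)
        except AttributeError:
            return default

t = Thing()
a = t.get("flux")     # same as t.flux → 42
b = t.get("missing")  # no such attribute → None
```

Because it goes through `getattr()`, this also works for computed properties, which plain dictionary indexing cannot reach.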

Source code in chromatic/rainbows/helpers/get.py
def get(self, key, default=None):
    """
    Retrieve an attribute by its string name.
    (This is a friendlier wrapper for `getattr()`).

    `r.get('flux')` is identical to `r.flux`

    This is different from indexing directly into
    a core dictionary (for example, `r.fluxlike['flux']`),
    because it can also be used to get the results of
    properties that do calculations on the fly (for example,
    `r.residuals` in the `RainbowWithModel` class).

    Parameters
    ----------
    key : str
        The name of the attribute, property, or core dictionary item to get.
    default : any, optional
        What to return if the attribute can't be found.

    Returns
    -------
    thing : any
        The thing you were trying to get. If unavailable,
        return the `default` (which by default is `None`)
    """
    try:
        return getattr(self, key)
    except AttributeError:
        return default

Print a quick reference of key actions available for this Rainbow.

Source code in chromatic/rainbows/helpers/help.py
def help(self):
    """
    Print a quick reference of key actions available for this `Rainbow`.
    """
    print(
        textwrap.dedent(
            """
    Hooray for you! You asked for help on what you can do
    with this 🌈 object. Here's a quick reference of a few
    available options for things to try."""
        )
    )

    base_directory = pkg_resources.resource_filename("chromatic", "rainbows")
    descriptions_files = []
    for level in ["*", "*/*"]:
        descriptions_files += glob.glob(
            os.path.join(base_directory, level, "descriptions.txt")
        )
    categories = [
        d.replace(base_directory + "/", "").replace("/descriptions.txt", "")
        for d in descriptions_files
    ]
    for i in np.argsort(categories):
        c, d = categories[i], descriptions_files[i]
        header = (
            "\n" + "-" * (len(c) + 4) + "\n" + f"| {c} |\n" + "-" * (len(c) + 4) + "\n"
        )

        table = ascii.read(d)
        items = []
        for row in table:
            name = row["name"]
            if hasattr(self, name) or (name in ["+-*/", "[:,:]"]):
                if name in "+-*/":
                    function_call = f"{name}"
                else:
                    function_call = f".{name}()"

                item = (
                    f"{row['cartoon']} | {function_call:<28} \n   {row['description']}"
                )
                items.append(item)
        if len(items) > 0:
            print(header)
            print("\n".join(items))

Return a summary of the history of actions that have gone into this Rainbow.

Returns#

history : str A string that does its best to try to summarize all the actions that have been applied to this Rainbow object from the moment it was created. In some (but not all) cases, it may be possible to copy, paste, and rerun this code to recreate the Rainbow.

Source code in chromatic/rainbows/helpers/history.py
def history(self):
    """
    Return a summary of the history of actions that have gone into this `Rainbow`.

    Returns
    -------
    history : str
        A string that does its best to try to summarize
        all the actions that have been applied to this
        `Rainbow` object from the moment it was created.
        In some (but not all) cases, it may be possible
        to copy, paste, and rerun this code to recreate
        the `Rainbow`.
    """

    calls = self.metadata["history"]
    return "(\n" + "\n".join(calls) + "\n)"

Save this Rainbow out to a file.

Parameters#

filepath : str The filepath pointing to the file to be written. (For now, it needs a .rainbow.npy extension.) format : str, optional The file format of the file to be written. If None, the format will be guessed automatically from the filepath. **kw : dict, optional All other keywords will be passed to the writer.

Source code in chromatic/rainbows/helpers/save.py
def save(self, filepath="test.rainbow.npy", format=None, **kw):
    """
    Save this `Rainbow` out to a file.

    Parameters
    ----------
    filepath : str
        The filepath pointing to the file to be written.
        (For now, it needs a `.rainbow.npy` extension.)
    format : str, optional
        The file format of the file to be written. If `None`,
        the format will be guessed automatically from the
        filepath.
    **kw : dict, optional
        All other keywords will be passed to the writer.
    """
    # figure out the best writer
    writer = guess_writer(filepath, format=format)

    # use that writer to save the file
    writer(self, filepath, **kw)

🌈 Actions#

Use 2D wavelength information to align onto a single 1D wavelength array.

This relies on the existence of a .fluxlike['wavelength_2d'] array, expressing the wavelength associated with each flux element. Those wavelengths will be used to (a) establish a new compromise wavelength grid and (b) bin the individual timepoints onto that new grid, effectively shifting the wavelengths to align.

Parameters#

minimum_acceptable_ok : float, optional
    The numbers in the .ok attribute express "how OK?" each data point is, ranging from 0 (not OK) to 1 (super OK). In most cases, .ok will be binary, but there may be times where it's intermediate (for example, if a bin was created from some data that were not OK and some that were). The minimum_acceptable_ok parameter allows you to specify what level of OK-ness is needed for a point to go into the binning. Reasonable options may include:
    - minimum_acceptable_ok = 1: Only data points that are perfectly OK will go into the binning. All other points will effectively be interpolated over. Flux uncertainties should be inflated appropriately, but it's very possible to create correlated bins next to each other if many of your ingoing data points are not perfectly OK.
    - minimum_acceptable_ok > 0: All data points that aren't definitely not OK will go into the binning. The OK-ness of points will propagate onward for future binning.
    - minimum_acceptable_ok < 0: All data points will be included in the bin. The OK-ness will propagate onward.
wscale : str, optional
    What kind of a new wavelength axis should be created? Options include:
    - 'linear' = constant d[wavelength] between grid points
    - 'log' = constant d[wavelength]/[wavelength] between grid points
    - 'nonlinear' = the median wavelength grid for all time points
supersampling : float, optional
    By how many times should we increase or decrease the wavelength sampling? In general, values >1 will split each input wavelength grid point into multiple supersampled wavelength grid points, values close to 1 will produce approximately one output wavelength for each input wavelength, and values <1 will average multiple input wavelengths into a single output wavelength bin. Unless this is significantly less than 1, there's a good chance your output array may have strong correlations between one or more adjacent wavelengths. Be careful when trying to use the resulting uncertainties!
visualize : bool
    Should we make some plots showing how the shared wavelength axis compares to the original input wavelength axes?

Returns#

rainbow : Rainbow A new Rainbow object, with wavelengths aligned onto a shared 1D grid.

Source code in chromatic/rainbows/actions/align_wavelengths.py
def align_wavelengths(
    self,
    minimum_acceptable_ok=1,
    minimum_points_per_bin=0,
    wscale="linear",
    supersampling=1,
    visualize=False,
):
    """
    Use 2D wavelength information to align onto a single 1D wavelength array.

    This relies on the existence of a `.fluxlike['wavelength_2d']` array,
    expressing the wavelength associated with each flux element.
    Those wavelengths will be used to (a) establish a new compromise
    wavelength grid and (b) bin the individual timepoints onto that
    new grid, effectively shifting the wavelengths to align.

    Parameters
    ----------
    minimum_acceptable_ok : float, optional
        The numbers in the `.ok` attribute express "how OK?" each
        data point is, ranging from 0 (not OK) to 1 (super OK).
        In most cases, `.ok` will be binary, but there may be times
        where it's intermediate (for example, if a bin was created
        from some data that were not OK and some that were).
        The `minimum_acceptable_ok` parameter allows you to specify what
        level of OK-ness is needed for a point to go into the binning.
        Reasonable options may include:
            minimum_acceptable_ok = 1
                  Only data points that are perfectly OK
                  will go into the binning. All other points
                  will effectively be interpolated over. Flux
                  uncertainties *should* be inflated appropriately,
                  but it's very possible to create correlated
                  bins next to each other if many of your ingoing
                  data points are not perfectly OK.
            minimum_acceptable_ok > 0
                  All data points that aren't definitely not OK
                  will go into the binning. The OK-ness of points
                  will propagate onward for future binning.
            minimum_acceptable_ok < 0
                  All data points will be included in the bin.
                  The OK-ness will propagate onward.
    wscale : str, optional
        What kind of a new wavelength axis should be created?
        Options include:
            'linear' = constant d[wavelength] between grid points
            'log' = constant d[wavelength]/[wavelength] between grid points
            'nonlinear' = the median wavelength grid for all time points
    supersampling : float, optional
        By how many times should we increase or decrease the wavelength sampling?
        In general, values >1 will split each input wavelength grid point into
        multiple supersampled wavelength grid points, values close to 1 will
        produce approximately one output wavelength for each input wavelength,
        and values <1 will average multiple input wavelengths into a single output
        wavelength bin.
        Unless this is significantly less than 1, there's a good chance your output
        array may have strong correlations between one or more adjacent wavelengths.
        Be careful when trying to use the resulting uncertainties!
    visualize : bool
        Should we make some plots showing how the shared wavelength
        axis compares to the original input wavelength axes?

    Returns
    -------
    rainbow : Rainbow
        A new `Rainbow` object, with wavelengths aligned onto a shared 1D grid.
    """
    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("align_wavelengths", locals())

    if "wavelength_2d" not in self.fluxlike:
        cheerfully_suggest(
            f"""
        No 2D wavelength information was found, so
        it's assumed wavelengths don't need to be aligned.
        Wavelength alignment is being skipped!
        """
        )
        shifted = self._create_copy()
    else:
        # create a shared wavelength array
        shared_wavelengths = self._create_shared_wavelength_axis(
            wscale=wscale, supersampling=supersampling, visualize=visualize
        )

        with warnings.catch_warnings():
            warnings.simplefilter("ignore")

            # bin the rainbow onto that new grid, starting from 2D wavelengths
            shifted = self.bin_in_wavelength(
                wavelength=shared_wavelengths,
                minimum_acceptable_ok=minimum_acceptable_ok,
                starting_wavelengths="2D",
                minimum_points_per_bin=minimum_points_per_bin,
            )

    # append the history entry to the new Rainbow
    shifted._record_history_entry(h)

    # return the new Rainbow
    return shifted

Attach a fluxlike model, thus making a new RainbowWithModel.

Having a model attached makes it possible to make calculations (residuals, chi^2) and visualizations comparing data to model.

The model array will be stored in .fluxlike['model']. After running this to make a RainbowWithModel, it's OK (and faster) to simply update .fluxlike['model'] or .model.

Parameters#

model : array, Quantity
    An array of model values, with the same shape as 'flux'.
**kw : dict, optional
    All other keywords will be interpreted as items that can be added to a Rainbow. You might use this to attach intermediate model steps or quantities. Variable names ending with _model can be particularly easily incorporated into multi-part model visualizations (for example, 'planet_model' or 'systematics_model').

Returns#

rainbow : RainbowWithModel A new RainbowWithModel object, with the model attached.
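The payoff of attaching a same-shaped model is that data-to-model comparisons reduce to simple array arithmetic. Here is a rough numpy sketch of the residuals and chi^2 a RainbowWithModel enables; the arrays and shapes are invented purely for illustration:

```python
import numpy as np

# hypothetical data: 3 wavelengths x 5 times (shapes invented for illustration)
flux = np.ones((3, 5)) + np.array([[0.01], [0.00], [-0.01]])
uncertainty = np.full((3, 5), 0.01)

# the model must have the same shape as flux (attach_model asserts this)
model = np.ones((3, 5))

# with a same-shaped model attached, comparisons are simple array math
residuals = flux - model
chi_squared = np.sum((residuals / uncertainty) ** 2)
```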

Source code in chromatic/rainbows/actions/attach_model.py
def attach_model(self, model, **kw):
    """
    Attach a `fluxlike` model, thus making a new `RainbowWithModel.`

    Having a model attached makes it possible to make calculations
    (residuals, chi^2) and visualizations comparing data to model.

    The `model` array will be stored in `.fluxlike['model']`.
    After running this to make a `RainbowWithModel` it's OK
    (and faster) to simply update `.fluxlike['model']` or `.model`.

    Parameters
    ----------
    model : array, Quantity
        An array of model values, with the same shape as 'flux'
    **kw : dict, optional
        All other keywords will be interpreted as items
        that can be added to a `Rainbow`. You might use this
        to attach intermediate model steps or quantities.
        Variable names ending with `_model` can be particularly
        easily incorporated into multi-part model visualizations
        (for example, `'planet_model'` or `'systematics_model'`).


    Returns
    -------
    rainbow : RainbowWithModel
        A new `RainbowWithModel` object, with the model attached.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("attach_model", locals())

    # make sure the shape is reasonable
    assert np.shape(model) == np.shape(self.flux)

    # add the model to the fluxlike array
    inputs = self._create_copy()._get_core_dictionaries()
    inputs["fluxlike"]["model"] = model

    # import here (rather than globally) to avoid recursion?
    from ..withmodel import RainbowWithModel

    # create new object
    new = RainbowWithModel(**inputs)

    # add other inputs to the model
    for k, v in kw.items():
        new.__setattr__(k, v)

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # return the RainbowWithModel
    return new

bin(self, dt=None, time=None, time_edges=None, ntimes=None, R=None, dw=None, wavelength=None, wavelength_edges=None, nwavelengths=None, minimum_acceptable_ok=1, minimum_points_per_bin=None, trim=True) #

Bin in wavelength and/or time.

Average together some number of adjacent data points, in wavelength and/or time. For well-behaved data where data points are independent from each other, binning down by N data points should decrease the noise per bin by approximately 1/sqrt(N), making it easier to see subtle signals. To bin data points together, data are combined using inverse-variance weighting through interpolation of cumulative distributions, in an attempt to make sure that flux integrals between limits are maintained.

Currently, the inverse-variance weighting is most reliable only for datasets that have been normalized to be close to 1. We still need to do a little work to make sure it works well on unnormalized datasets with dramatically non-uniform uncertainties.

By default, time binning happens before wavelength binning. To control the order, use separate calls to .bin().

The time-setting order of precedence is [time_edges, time, dt, ntimes]. The first provided will be used, and the others will be ignored.

The wavelength-setting order of precedence is [wavelength_edges, wavelength, dw, R, nwavelengths]. The first provided will be used, and the others will be ignored.

Parameters#

dt : Quantity
    The d(time) bin size for creating a grid that is uniform in linear space.
time : Quantity
    An array of times, if you just want to give it an entirely custom array. The widths of the bins will be guessed from the centers (well, if the spacing is uniform; pretty well but not perfectly otherwise).
time_edges : Quantity
    An array of times for the edges of bins, if you just want to give an entirely custom array. The bins will span time_edges[:-1] to time_edges[1:], so the resulting binned Rainbow will have len(time_edges) - 1 time bins associated with it.
ntimes : int
    A fixed number of times to bin together. Binning will start from the 0th element of the starting times; if you want to start from a different index, trim before binning.
R : float
    The spectral resolution for creating a grid that is uniform in logarithmic space.
dw : Quantity
    The d(wavelength) bin size for creating a grid that is uniform in linear space.
wavelength : Quantity
    An array of wavelengths for the centers of bins, if you just want to give an entirely custom array. The widths of the bins will be guessed from the centers (well, if the spacing is uniform; pretty well but not perfectly otherwise).
wavelength_edges : Quantity
    An array of wavelengths for the edges of bins, if you just want to give an entirely custom array. The bins will span wavelength_edges[:-1] to wavelength_edges[1:], so the resulting binned Rainbow will have len(wavelength_edges) - 1 wavelength bins associated with it.
nwavelengths : int
    A fixed number of wavelengths to bin together. Binning will start from the 0th element of the starting wavelengths; if you want to start from a different index, trim before binning.
minimum_acceptable_ok : float
    The numbers in the .ok attribute express "how OK?" each data point is, ranging from 0 (not OK) to 1 (super OK). In most cases, .ok will be binary, but there may be times where it's intermediate (for example, if a bin was created from some data that were not OK and some that were). The minimum_acceptable_ok parameter allows you to specify what level of OK-ness is required for a point to go into the binning. Reasonable options include:
        minimum_acceptable_ok = 1: only data points that are perfectly OK will go into the binning.
        minimum_acceptable_ok = 1e-10: all data points that aren't definitely not OK will go into the binning.
        minimum_acceptable_ok = 0: all data points will be included in the bin.
minimum_points_per_bin : float
    If you're creating bins that are smaller than those in the original dataset, it's possible to end up with bins that effectively contain fewer than one original datapoint (in the sense that the contribution of one original datapoint might be split across multiple new bins). By default, we allow this behavior with minimum_points_per_bin=0, but you can limit your result to only bins that contain one or more original datapoints with minimum_points_per_bin=1.
trim : bool
    Should any wavelengths or columns that end up as entirely nan be trimmed out of the result? (default = True)

Returns#

binned : Rainbow The binned Rainbow.
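The inverse-variance weighting described above can be sketched with plain numpy. This is a conceptual illustration only, not chromatic's actual binning routine (which additionally works through cumulative distributions to conserve flux integrals); the flux and uncertainty values are invented:

```python
import numpy as np

# four adjacent flux points being averaged into a single bin,
# each weighted by the inverse of its variance
flux = np.array([1.00, 1.02, 0.98, 1.01])
unc = np.array([0.02, 0.02, 0.01, 0.01])

weights = 1 / unc**2
binned_flux = np.sum(weights * flux) / np.sum(weights)

# the binned uncertainty shrinks roughly like 1/sqrt(N),
# and is always smaller than any single input uncertainty
binned_unc = 1 / np.sqrt(np.sum(weights))
```

Note how the more precise points (unc = 0.01) pull the weighted average toward their values, which is the behavior that makes this weighting most reliable for data normalized close to 1.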

Source code in chromatic/rainbows/actions/binning.py
def bin(
    self,
    dt=None,
    time=None,
    time_edges=None,
    ntimes=None,
    R=None,
    dw=None,
    wavelength=None,
    wavelength_edges=None,
    nwavelengths=None,
    minimum_acceptable_ok=1,
    minimum_points_per_bin=None,
    trim=True,
):
    """
    Bin in wavelength and/or time.

    Average together some number of adjacent data points,
    in wavelength and/or time. For well-behaved data where
    data points are independent from each other, binning down
    by N data points should decrease the noise per bin by
    approximately 1/sqrt(N), making it easier to see subtle
    signals. To bin data points together, data are combined
    using inverse-variance weighting through interpolation
    of cumulative distributions, in an attempt to make sure
    that flux integrals between limits are maintained.

    Currently, the inverse-variance weighting is most reliable
    only for datasets that have been normalized to be close
    to 1. We still need to do a little work to make sure
    it works well on unnormalized datasets with dramatically
    non-uniform uncertainties.

    By default, time binning happens before wavelength binning.
    To control the order, use separate calls to `.bin()`.

    The time-setting order of precedence is
    [`time_edges`, `time`, `dt`, `ntimes`]
    The first will be used, and others will be ignored.

    The wavelength-setting order of precedence is
    [`wavelength_edges`, `wavelength`, `dw`, `R`, `nwavelengths`]
    The first will be used, and others will be ignored.


    Parameters
    ----------
    dt : Quantity
        The d(time) bin size for creating a grid
        that is uniform in linear space.
    time : Quantity
        An array of times, if you just want to give
        it an entirely custom array.
        The widths of the bins will be guessed from the centers
        (well, if the spacing is uniform; pretty well
        but not perfectly otherwise).
    time_edges : Quantity
        An array of times for the edges of bins,
        if you just want to give an entirely custom array.
        The bins will span `time_edges[:-1]` to
        `time_edges[1:]`, so the resulting binned
        Rainbow will have `len(time_edges) - 1`
        time bins associated with it.
    ntimes : int
        A fixed number of times to bin together.
        Binning will start from the 0th element of the
        starting times; if you want to start from
        a different index, trim before binning.
    R : float
        The spectral resolution for creating a grid
        that is uniform in logarithmic space.
    dw : Quantity
        The d(wavelength) bin size for creating a grid
        that is uniform in linear space.
    wavelength : Quantity
        An array of wavelengths for the centers of bins,
        if you just want to give an entirely custom array.
        The widths of the bins will be guessed from the centers
        (well, if the spacing is uniform; pretty well
        but not perfectly otherwise).
    wavelength_edges : Quantity
        An array of wavelengths for the edges of bins,
        if you just want to give an entirely custom array.
        The bins will span `wavelength_edges[:-1]` to
        `wavelength_edges[1:]`, so the resulting binned
        Rainbow will have `len(wavelength_edges) - 1`
        wavelength bins associated with it.
    nwavelengths : int
        A fixed number of wavelengths to bin together.
        Binning will start from the 0th element of the
        starting wavelengths; if you want to start from
        a different index, trim before binning.
    minimum_acceptable_ok : float
        The numbers in the `.ok` attribute express "how OK?" each
        data point is, ranging from 0 (not OK) to 1 (super OK).
        In most cases, `.ok` will be binary, but there may be times
        where it's intermediate (for example, if a bin was created
        from some data that were not OK and some that were).
        The `minimum_acceptable_ok` parameter allows you to specify what
        level of OK-ness is required for a point to go into the binning.
        Reasonable options may include:
            minimum_acceptable_ok = 1
                  Only data points that are perfectly OK
                  will go into the binning.
            minimum_acceptable_ok = 1e-10
                  All data points that aren't definitely not OK
                  will go into the binning.
            minimum_acceptable_ok = 0
                  All data points will be included in the bin.
    minimum_points_per_bin : float
        If you're creating bins that are smaller than those in
        the original dataset, it's possible to end up with bins
        that effectively contain fewer than one original datapoint
        (in the sense that the contribution of one original datapoint
        might be split across multiple new bins). By default,
        we allow this behavior with `minimum_points_per_bin=0`, but you can
        limit your result to only bins that contain one or more
        original datapoints with `minimum_points_per_bin=1`.
    trim : bool
        Should any wavelengths or columns that end up
        as entirely nan be trimmed out of the result?
        (default = True)

    Returns
    -------
    binned : Rainbow
        The binned `Rainbow`.
    """

    # bin first in time
    binned_in_time = self.bin_in_time(
        dt=dt,
        time=time,
        time_edges=time_edges,
        ntimes=ntimes,
        minimum_acceptable_ok=minimum_acceptable_ok,
        minimum_points_per_bin=minimum_points_per_bin,
        trim=trim,
    )

    # then bin in wavelength
    binned = binned_in_time.bin_in_wavelength(
        R=R,
        dw=dw,
        wavelength=wavelength,
        wavelength_edges=wavelength_edges,
        nwavelengths=nwavelengths,
        minimum_acceptable_ok=minimum_acceptable_ok,
        minimum_points_per_bin=minimum_points_per_bin,
        trim=trim,
    )

    # return the binned object
    return binned

bin_in_time(self, dt=None, time=None, time_edges=None, ntimes=None, minimum_acceptable_ok=1, minimum_points_per_bin=None, trim=True) #

Bin in time.

The time-setting order of precedence is [time_edges, time, dt, ntimes]. The first provided will be used, and the others will be ignored.

Parameters#

dt : Quantity
    The d(time) bin size for creating a grid that is uniform in linear space.
time : Quantity
    An array of times, if you just want to give it an entirely custom array. The widths of the bins will be guessed from the centers (well, if the spacing is uniform; pretty well but not perfectly otherwise).
time_edges : Quantity
    An array of times for the edges of bins, if you just want to give an entirely custom array. The bins will span time_edges[:-1] to time_edges[1:], so the resulting binned Rainbow will have len(time_edges) - 1 time bins associated with it.
ntimes : int
    A fixed number of times to bin together. Binning will start from the 0th element of the starting times; if you want to start from a different index, trim before binning.
minimum_acceptable_ok : float
    The numbers in the .ok attribute express "how OK?" each data point is, ranging from 0 (not OK) to 1 (super OK). In most cases, .ok will be binary, but there may be times where it's intermediate (for example, if a bin was created from some data that were not OK and some that were). The minimum_acceptable_ok parameter allows you to specify what level of OK-ness is required for a point to go into the binning. Reasonable options include:
        minimum_acceptable_ok = 1: only data points that are perfectly OK will go into the binning.
        minimum_acceptable_ok = 1e-10: all data points that aren't definitely not OK will go into the binning.
        minimum_acceptable_ok = 0: all data points will be included in the bin.
minimum_points_per_bin : float
    If you're creating bins that are smaller than those in the original dataset, it's possible to end up with bins that effectively contain fewer than one original datapoint (in the sense that the contribution of one original datapoint might be split across multiple new bins). By default, we allow this behavior with minimum_points_per_bin=0, but you can limit your result to only bins that contain one or more original datapoints with minimum_points_per_bin=1.
trim : bool
    Should any wavelengths or columns that end up as entirely nan be trimmed out of the result? (default = True)

Returns#

binned : Rainbow The binned Rainbow.
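The time_edges convention above (bins spanning time_edges[:-1] to time_edges[1:], giving len(time_edges) - 1 bins) can be sketched with numpy. The specific time stamps and edges here are invented, and this simple mean ignores the inverse-variance weighting the real routine applies:

```python
import numpy as np

# 10 original time stamps and edges defining 2 custom bins
time = np.linspace(0.0, 1.0, 10)
time_edges = np.array([0.0, 0.5, 1.0])

# assign each original time to a bin, then average within each bin
indices = np.digitize(time, time_edges[1:-1])
binned_time = np.array(
    [time[indices == i].mean() for i in range(len(time_edges) - 1)]
)
```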

Source code in chromatic/rainbows/actions/binning.py
def bin_in_time(
    self,
    dt=None,
    time=None,
    time_edges=None,
    ntimes=None,
    minimum_acceptable_ok=1,
    minimum_points_per_bin=None,
    trim=True,
):
    """
    Bin in time.

    The time-setting order of precedence is
    [`time_edges`, `time`, `dt`, `ntimes`]
    The first will be used, and others will be ignored.


    Parameters
    ----------
    dt : Quantity
        The d(time) bin size for creating a grid
        that is uniform in linear space.
    time : Quantity
        An array of times, if you just want to give
        it an entirely custom array.
        The widths of the bins will be guessed from the centers
        (well, if the spacing is uniform; pretty well
        but not perfectly otherwise).
    time_edges : Quantity
        An array of times for the edges of bins,
        if you just want to give an entirely custom array.
        The bins will span `time_edges[:-1]` to
        `time_edges[1:]`, so the resulting binned
        `Rainbow` will have `len(time_edges) - 1`
        time bins associated with it.
    ntimes : int
        A fixed number of times to bin together.
        Binning will start from the 0th element of the
        starting times; if you want to start from
        a different index, trim before binning.
    minimum_acceptable_ok : float
        The numbers in the `.ok` attribute express "how OK?" each
        data point is, ranging from 0 (not OK) to 1 (super OK).
        In most cases, `.ok` will be binary, but there may be times
        where it's intermediate (for example, if a bin was created
        from some data that were not OK and some that were).
        The `minimum_acceptable_ok` parameter allows you to specify what
        level of OK-ness is required for a point to go into the binning.
        Reasonable options may include:
            minimum_acceptable_ok = 1
                  Only data points that are perfectly OK
                  will go into the binning.
            minimum_acceptable_ok = 1e-10
                  All data points that aren't definitely not OK
                  will go into the binning.
            minimum_acceptable_ok = 0
                  All data points will be included in the bin.
    minimum_points_per_bin : float
        If you're creating bins that are smaller than those in
        the original dataset, it's possible to end up with bins
        that effectively contain fewer than one original datapoint
        (in the sense that the contribution of one original datapoint
        might be split across multiple new bins). By default,
        we allow this behavior with `minimum_points_per_bin=0`, but you can
        limit your result to only bins that contain one or more
        original datapoints with `minimum_points_per_bin=1`.
    trim : bool
        Should any wavelengths or columns that end up
        as entirely nan be trimmed out of the result?
        (default = True)

    Returns
    -------
    binned : Rainbow
        The binned `Rainbow`.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("bin_in_time", locals())

    # if no bin information is provided, don't bin
    if np.all([x is None for x in [dt, time, time_edges, ntimes]]):
        return self

    # set up binning parameters
    binkw = dict(weighting="inversevariance", drop_nans=False)

    # [`time_edges`, `time`, `dt`, `ntimes`]
    if time_edges is not None:
        binkw["newx_edges"] = time_edges
    elif time is not None:
        binkw["newx"] = time
    elif dt is not None:
        binkw["dx"] = dt
    elif ntimes is not None:
        binkw["nx"] = ntimes

    # create a new, empty Rainbow
    new = self._create_copy()

    # populate the wavelength information
    new.wavelike = {**self.wavelike}
    new.metadata["wscale"] = self.wscale

    # bin the time-like variables
    # Technically, we should include uncertainties here too,
    # so that times/wavelengths are weighted more toward
    # inputs with higher flux weights (e.g. smaller variance),
    # but that will make non-uniform grids that will be
    # really hard to deal with.
    new.timelike = {}
    for k in self.timelike:
        binned = bintogrid(x=self.time, y=self.timelike[k], unc=None, **binkw)
        new.timelike[k] = binned["y"]
    new.timelike["time"] = binned["x"]
    new.timelike["time_lower"] = binned["x_edge_lower"]
    new.timelike["time_upper"] = binned["x_edge_upper"]
    new.timelike["unbinned_times_per_binned_time"] = binned["N_unbinned/N_binned"]

    # bin the flux-like variables
    # TODO (add more careful treatment of uncertainty + DQ)
    # TODO (think about cleverer bintogrid for 2D arrays?)
    new.fluxlike = {}
    ok = self.ok
    # loop through wavelengths
    for w in tqdm(np.arange(new.nwave), leave=False):

        '''
        if k == "uncertainty":
            cheerfully_suggest(
                """
            Uncertainties and/or data quality flags might
            not be handled absolutely perfectly yet...
            """
            )'''

        for k in self.fluxlike:
            # mask out "bad" wavelengths
            time_is_bad = ok[w, :] < minimum_acceptable_ok
            if (self.uncertainty is None) or np.all(self.uncertainty == 0):
                uncertainty_for_binning = np.ones(self.ntime).astype(bool)
            elif k in self._keys_that_get_uncertainty_weighting:
                uncertainty_for_binning = self.uncertainty[w, :] * 1
            else:
                uncertainty_for_binning = np.ones(self.ntime).astype(bool)

            if k != "ok":
                uncertainty_for_binning[time_is_bad] = np.inf

            # bin the quantities for this wavelength
            binned = bintogrid(
                x=self.time[:],
                y=self.fluxlike[k][w, :],
                unc=uncertainty_for_binning,
                **binkw,
            )

            # if necessary, create a new fluxlike array
            if k not in new.fluxlike:
                new_shape = (new.nwave, new.ntime)
                new.fluxlike[k] = np.zeros(new_shape)
                if isinstance(self.fluxlike[k], u.Quantity):
                    new.fluxlike[k] *= self.fluxlike[k].unit

            # store the binned array in the appropriate place
            if k == "uncertainty":
                # uncertainties are usually standard error on the mean
                new.fluxlike[k][w, :] = binned["uncertainty"]
            else:
                # note: all quantities are weighted the same as flux (probably inversevariance)
                new.fluxlike[k][w, :] = binned["y"]

    if (new.nwave == 0) or (new.ntime == 0):
        message = f"""
        You tried to bin {self} to {new}.

        After accounting for `minimum_acceptable_ok > {minimum_acceptable_ok}`,
        all new bins would end up with no usable data points.
        Please (a) make sure your input `Rainbow` has at least
        one wavelength and time, (b) check `.ok` accurately expresses
        which data you think are usable, (c) change the `minimum_acceptable_ok`
        keyword for `.bin` to a smaller value, and/or (d) try larger bins.
        """
        cheerfully_suggest(message)
        raise RuntimeError("No good data to bin! (see above)")

    # make sure dictionaries are on the up and up
    new._validate_core_dictionaries()

    # figure out the scales, after binning
    new._guess_wscale()
    new._guess_tscale()

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # deal with bins that are smaller than original
    N = new.timelike["unbinned_times_per_binned_time"]
    if minimum_points_per_bin is None:
        _warn_about_weird_binning(N, "time")
    else:
        ok = new.timelike.get("ok", np.ones(new.ntime, bool))
        new.timelike["ok"] = ok * (N >= minimum_points_per_bin)

    # return the new Rainbow (with trimming if necessary)
    if trim:
        return new.trim_times(minimum_acceptable_ok=minimum_acceptable_ok)
    else:
        return new

bin_in_wavelength(self, R=None, dw=None, wavelength=None, wavelength_edges=None, nwavelengths=None, minimum_acceptable_ok=1, minimum_points_per_bin=None, trim=True, starting_wavelengths='1D') #

Bin in wavelength.

The wavelength-setting order of precedence is [wavelength_edges, wavelength, dw, R, nwavelengths]. The first provided will be used, and the others will be ignored.

Parameters#

R : float
    The spectral resolution for creating a grid that is uniform in logarithmic space.
dw : Quantity
    The d(wavelength) bin size for creating a grid that is uniform in linear space.
wavelength : Quantity
    An array of wavelength centers, if you just want to give it an entirely custom array. The widths of the bins will be guessed from the centers. It will do a good job if the widths are constant, but don't 100% trust it otherwise.
wavelength_edges : Quantity
    An array of wavelengths for the edges of bins, if you just want to give an entirely custom array. The bins will span wavelength_edges[:-1] to wavelength_edges[1:], so the resulting binned Rainbow will have len(wavelength_edges) - 1 wavelength bins associated with it.
nwavelengths : int
    A fixed number of wavelengths to bin together. Binning will start from the 0th element of the starting wavelengths; if you want to start from a different index, trim before binning.
minimum_acceptable_ok : float
    The numbers in the .ok attribute express "how OK?" each data point is, ranging from 0 (not OK) to 1 (super OK). In most cases, .ok will be binary, but there may be times where it's intermediate (for example, if a bin was created from some data that were not OK and some that were). The minimum_acceptable_ok parameter allows you to specify what level of OK-ness is required for a point to go into the binning. Reasonable options include:
        minimum_acceptable_ok = 1: only data points that are perfectly OK will go into the binning.
        minimum_acceptable_ok = 1e-10: all data points that aren't definitely not OK will go into the binning.
        minimum_acceptable_ok = 0: all data points will be included in the bin.
minimum_points_per_bin : float
    If you're creating bins that are smaller than those in the original dataset, it's possible to end up with bins that effectively contain fewer than one original datapoint (in the sense that the contribution of one original datapoint might be split across multiple new bins). By default, we allow this behavior with minimum_points_per_bin=0, but you can limit your result to only bins that contain one or more original datapoints with minimum_points_per_bin=1.
trim : bool
    Should any wavelengths or columns that end up as entirely nan be trimmed out of the result? (default = True)
starting_wavelengths : str
    What wavelengths should be used as the starting value from which we will be binning? Options include:
        '1D' = (default) the shared 1D wavelengths for all times, stored in .wavelike['wavelength']
        '2D' = (used only by align_wavelengths) the per-time 2D array, stored in .fluxlike['wavelength']
    (Most users probably don't need to change this from the default.)

Returns#

binned : Rainbow The binned Rainbow.
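A grid "uniform in logarithmic space" at spectral resolution R means adjacent wavelengths differ by a constant factor of exp(1/R), since R = wavelength / d(wavelength). A quick numpy sketch of building such a grid; the wavelength range and resolution are chosen arbitrarily for illustration:

```python
import numpy as np

# build a log-uniform wavelength grid at resolution R = w / dw
R = 100
w_min, w_max = 1.0, 2.0  # e.g. microns; values invented for illustration

# number of steps needed to cover [w_min, w_max] in log space
n = int(np.ceil(np.log(w_max / w_min) * R)) + 1
grid = w_min * np.exp(np.arange(n) / R)

# adjacent grid points are spaced uniformly in log(wavelength)
log_spacing = np.diff(np.log(grid))
```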

Source code in chromatic/rainbows/actions/binning.py
def bin_in_wavelength(
    self,
    R=None,
    dw=None,
    wavelength=None,
    wavelength_edges=None,
    nwavelengths=None,
    minimum_acceptable_ok=1,
    minimum_points_per_bin=None,
    trim=True,
    starting_wavelengths="1D",
):
    """
    Bin in wavelength.

    The wavelength-setting order of precedence is
    [`wavelength_edges`, `wavelength`, `dw`, `R`, `nwavelengths`]
    The first will be used, and others will be ignored.

    Parameters
    ----------
    R : float
        The spectral resolution for creating a grid
        that is uniform in logarithmic space.
    dw : Quantity
        The d(wavelength) bin size for creating a grid
        that is uniform in linear space.
    wavelength : Quantity
        An array of wavelength centers, if you just want to give
        it an entirely custom array. The widths of the bins
        will be guessed from the centers. It will do a good
        job if the widths are constant, but don't 100% trust
        it otherwise.
    wavelength_edges : Quantity
        An array of wavelengths for the edges of bins,
        if you just want to give an entirely custom array.
        The bins will span `wavelength_edges[:-1]` to
        `wavelength_edges[1:]`, so the resulting binned
        `Rainbow` will have `len(wavelength_edges) - 1`
        wavelength bins associated with it.
    nwavelengths : int
        A fixed number of wavelengths to bin together.
        Binning will start from the 0th element of the
        starting wavelengths; if you want to start from
        a different index, trim before binning.
    minimum_acceptable_ok : float
        The numbers in the `.ok` attribute express "how OK?" each
        data point is, ranging from 0 (not OK) to 1 (super OK).
        In most cases, `.ok` will be binary, but there may be times
        where it's intermediate (for example, if a bin was created
        from some data that were not OK and some that were).
        The `minimum_acceptable_ok` parameter allows you to specify what
        level of OK-ness is required for a point to go into the binning.
        Reasonable options may include:
            minimum_acceptable_ok = 1
                  Only data points that are perfectly OK
                  will go into the binning.
            minimum_acceptable_ok = 1e-10
                  All data points that aren't definitely not OK
                  will go into the binning.
            minimum_acceptable_ok = 0
                  All data points will be included in the bin.
    minimum_points_per_bin : float
        If you're creating bins that are smaller than those in
        the original dataset, it's possible to end up with bins
        that effectively contain fewer than one original datapoint
        (in the sense that the contribution of one original datapoint
        might be split across multiple new bins). By default,
        we allow this behavior with `minimum_points_per_bin=0`, but you can
        limit your result to only bins that contain one or more
        original datapoints with `minimum_points_per_bin=1`.
    trim : bool
        Should any wavelengths or columns that end up
        as entirely nan be trimmed out of the result?
        (default = True)
    starting_wavelengths : str
        What wavelengths should be used as the starting
        value from which we will be binning? Options include:
        '1D' = (default) the shared 1D wavelengths for all times
               stored in `.wavelike['wavelength']`
        '2D' = (used only by `align_wavelengths`) the per-time 2D array
               stored in `.fluxlike['wavelength']`
        [Most users probably don't need to change this from default.]

    Returns
    -------
    binned : Rainbow
        The binned `Rainbow`.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("bin_in_wavelength", locals())

    # if no bin information is provided, don't bin
    if (
        (wavelength is None)
        and (wavelength_edges is None)
        and (nwavelengths is None)
        and (dw is None)
        and (R is None)
    ):
        return self

    if (
        (not self._is_probably_normalized())
        and (self.uncertainty is not None)
        and np.any(self.uncertainty != 0)
    ):

        cheerfully_suggest(
            f"""
        It looks like you're trying to bin in wavelength for a
        `Rainbow` object that might not be normalized. In the
        current version of `chromatic`, binning before normalizing
        might give inaccurate results if the typical uncertainty
        varies strongly with wavelength.

        Please consider normalizing first, for example with
        `rainbow.normalize().bin(...)`
        so that all uncertainties will effectively be relative,
        and the inverse variance weighting used for binning
        wavelengths together will give more reasonable answers.

        If you really need to bin before normalizing, please submit
        an Issue at github.com/zkbt/chromatic/, and we'll try to
        prioritize implementing a statistically sound solution as
        soon as possible!
        """
        )

    # set up binning parameters
    binkw = dict(weighting="inversevariance", drop_nans=False)

    # [`wavelength_edges`, `wavelength`, `dw`, `R`, `nwavelengths`]
    if wavelength_edges is not None:
        binning_function = bintogrid
        binkw["newx_edges"] = wavelength_edges
    elif wavelength is not None:
        binning_function = bintogrid
        binkw["newx"] = wavelength
    elif dw is not None:
        binning_function = bintogrid
        binkw["dx"] = dw
    elif R is not None:
        binning_function = bintoR
        binkw["R"] = R
    elif nwavelengths is not None:
        binning_function = bintogrid
        binkw["nx"] = nwavelengths

    # create a new, empty Rainbow
    new = self._create_copy()

    # populate the time information
    new.timelike = {**self.timelike}

    # bin the time-like variables
    # TODO (add more careful treatment of uncertainty + DQ)
    new.wavelike = {}
    for k in self.wavelike:
        binned = binning_function(
            x=self.wavelike["wavelength"], y=self.wavelike[k], unc=None, **binkw
        )
        new.wavelike[k] = binned["y"]
    new.wavelike["wavelength"] = binned["x"]
    new.wavelike["wavelength_lower"] = binned["x_edge_lower"]
    new.wavelike["wavelength_upper"] = binned["x_edge_upper"]
    new.wavelike["unbinned_wavelengths_per_binned_wavelength"] = binned[
        "N_unbinned/N_binned"
    ]

    # bin the flux-like variables
    # TODO (add more careful treatment of uncertainty + DQ)
    # TODO (think about cleverer bintogrid for 2D arrays)
    new.fluxlike = {}

    # get a fluxlike array of what's OK to include in the bins
    ok = self.ok
    for t in tqdm(np.arange(new.ntime), leave=False):

        for k in self.fluxlike:

            # mask out "bad" wavelengths
            wavelength_is_bad = ok[:, t] < minimum_acceptable_ok

            if (self.uncertainty is None) or np.all(self.uncertainty == 0):
                uncertainty_for_binning = np.ones(self.nwave).astype(bool)
            elif k in self._keys_that_get_uncertainty_weighting:
                uncertainty_for_binning = self.uncertainty[:, t] * 1
            else:
                uncertainty_for_binning = np.ones(self.nwave).astype(bool)
            if k != "ok":
                uncertainty_for_binning[wavelength_is_bad] = np.inf

            if starting_wavelengths.upper() == "1D":
                w = self.wavelike["wavelength"][:]
            elif starting_wavelengths.upper() == "2D":
                w = self.fluxlike["wavelength_2d"][:, t]
            # bin the quantities for this time
            binned = binning_function(
                x=w,
                y=self.fluxlike[k][:, t] * 1,
                unc=uncertainty_for_binning,
                **binkw,
            )

            # if necessary, create a new fluxlike array
            if k not in new.fluxlike:
                new_shape = (new.nwave, new.ntime)
                new.fluxlike[k] = np.zeros(new_shape)
                if isinstance(self.fluxlike[k], u.Quantity):
                    new.fluxlike[k] *= self.fluxlike[k].unit

            # store the binned array in the appropriate place
            if k == "uncertainty":
                # uncertainties are usually standard error on the mean
                new.fluxlike[k][:, t] = binned["uncertainty"]
            else:
                # note: all quantities are weighted the same as flux (probably inversevariance)
                new.fluxlike[k][:, t] = binned["y"]

    if (new.nwave == 0) or (new.ntime == 0):
        message = f"""
        You tried to bin {self} to {new}.

        After accounting for `minimum_acceptable_ok = {minimum_acceptable_ok}`,
        all new bins would end up with no usable data points.
        Please (a) make sure your input `Rainbow` has at least
        one wavelength and time, (b) check `.ok` accurately expresses
        which data you think are usable, (c) change the `minimum_acceptable_ok`
        keyword for `.bin` to a smaller value, and/or (d) try larger bins.
        """
        cheerfully_suggest(message)
        raise RuntimeError("No good data to bin! (see above)")

    # make sure dictionaries are on the up and up
    new._validate_core_dictionaries()

    # figure out the scales, after binning
    new._guess_wscale()
    new._guess_tscale()

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # deal with bins that are smaller than original
    N = new.wavelike["unbinned_wavelengths_per_binned_wavelength"]
    if minimum_points_per_bin is None:
        _warn_about_weird_binning(N, "wavelength")
    else:
        ok = new.wavelike.get("ok", np.ones(new.nwave, bool))
        new.wavelike["ok"] = ok * (N >= minimum_points_per_bin)

    # return the new Rainbow (with trimming if necessary)
    if trim:
        return new.trim_wavelengths(minimum_acceptable_ok=minimum_acceptable_ok)
    else:
        return new
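
The difference between the `R` and `dw` wavelength grids can be sketched in plain numpy. This is a minimal illustration of uniform-in-log versus uniform-in-linear bin spacing, not `chromatic`'s actual `bintoR`/`bintogrid` internals; the wavelength range and values are arbitrary:

```python
import numpy as np

def grid_uniform_in_R(wmin, wmax, R):
    """Bin edges spaced uniformly in log, so that w / dw is roughly R."""
    # each step multiplies the wavelength by (1 + 1/R)
    n = int(np.ceil(np.log(wmax / wmin) / np.log(1 + 1 / R)))
    return wmin * (1 + 1 / R) ** np.arange(n + 1)

def grid_uniform_in_dw(wmin, wmax, dw):
    """Bin edges spaced uniformly in linear wavelength."""
    return np.arange(wmin, wmax + dw, dw)

w_R = grid_uniform_in_R(1.0, 2.0, R=100)     # adjacent edges differ by 1%
w_lin = grid_uniform_in_dw(1.0, 2.0, dw=0.01)  # adjacent edges differ by 0.01
```

A grid built at resolution `R` keeps w/Δw roughly constant, so bins widen toward longer wavelengths, while a `dw` grid keeps the bin width itself constant.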

get_average_lightcurve_as_rainbow(self) #

Produce a wavelength-integrated light curve.

The average across wavelengths is uncertainty-weighted.

This uses bin, which is a needlessly slow way of doing what is fundamentally a very simple array calculation, because here we don't need to deal with partial pixels.

Returns#

lc : Rainbow A Rainbow object with just one wavelength.

Source code in chromatic/rainbows/actions/binning.py
def get_average_lightcurve_as_rainbow(self):
    """
    Produce a wavelength-integrated light curve.

    The average across wavelengths is uncertainty-weighted.

    This uses `bin`, which is a needlessly slow way of doing what is
    fundamentally a very simple array calculation, because here we
    don't need to deal with partial pixels.

    Returns
    -------
    lc : Rainbow
        A `Rainbow` object with just one wavelength.
    """
    h = self._create_history_entry("get_average_lightcurve_as_rainbow", locals())

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        new = self.bin(nwavelengths=self.nwave, trim=False)

    new._record_history_entry(h)
    return new
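
The uncertainty-weighted average across wavelengths that this method wraps can be sketched directly in numpy (toy arrays, assuming uncorrelated Gaussian uncertainties):

```python
import numpy as np

# toy flux[nwave, ntime] and matching uncertainties (not real data)
rng = np.random.default_rng(0)
nwave, ntime = 5, 4
uncertainty = np.full((nwave, ntime), 0.01)
flux = 1 + rng.normal(0, uncertainty)

# inverse-variance weights collapse the wavelength axis to one light curve
weights = 1 / uncertainty**2
lc_flux = np.sum(flux * weights, axis=0) / np.sum(weights, axis=0)

# uncertainty on the weighted mean shrinks as 1/sqrt(sum of weights)
lc_unc = 1 / np.sqrt(np.sum(weights, axis=0))
```

With equal 1% uncertainties on 5 wavelengths, the averaged light curve has per-time uncertainty 0.01/√5, which is what the inverse-variance weighting in `bin` converges to in this simple case.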

get_average_spectrum_as_rainbow(self) #

Produce a time-integrated spectrum.

The average across times is uncertainty-weighted.

This uses bin, which is a needlessly slow way of doing what is fundamentally a very simple array calculation, because here we don't need to deal with partial pixels.

Returns#

lc : Rainbow A Rainbow object with just one time.

Source code in chromatic/rainbows/actions/binning.py
def get_average_spectrum_as_rainbow(self):
    """
    Produce a time-integrated spectrum.

    The average across times is uncertainty-weighted.

    This uses `bin`, which is a needlessly slow way of doing what is
    fundamentally a very simple array calculation, because here we
    don't need to deal with partial pixels.

    Returns
    -------
    lc : Rainbow
        A `Rainbow` object with just one time.
    """
    h = self._create_history_entry("get_average_spectrum_as_rainbow", locals())

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        new = self.bin(ntimes=self.ntime, trim=False)

    new._record_history_entry(h)
    return new

Compare this Rainbow to others.

(still in development) This connects the current Rainbow to a collection of other Rainbow objects, which can then be visualized side-by-side in a uniform way.

Parameters#

rainbows : list A list containing one or more other Rainbow objects. If you only want to compare with one other Rainbow, supply it in a 1-element list like .compare([other])

Returns#

rainbow : MultiRainbow A MultiRainbow comparison object including all input Rainbows

Source code in chromatic/rainbows/actions/compare.py
def compare(self, rainbows):
    """
    Compare this `Rainbow` to others.

    (still in development) This connects the current `Rainbow`
    to a collection of other `Rainbow` objects, which can then
    be visualized side-by-side in a uniform way.

    Parameters
    ----------
    rainbows : list
        A list containing one or more other `Rainbow` objects.
        If you only want to compare with one other `Rainbow`,
        supply it in a 1-element list like `.compare([other])`

    Returns
    -------
    rainbow : MultiRainbow
        A `MultiRainbow` comparison object including all input `Rainbow`s
    """
    try:
        rainbows.remove(self)
    except (ValueError, IndexError):
        pass
    return compare_rainbows([self] + rainbows)

Flag outliers as not ok.

This examines the flux array, identifies significant outliers, and marks them 0 in the ok array. The default procedure is to use a median filter to remove temporal trends (remove_trends), inflate the uncertainties based on the median-absolute-deviation scatter (inflate_uncertainty), and call points outliers if they deviate by more than a certain number of sigma (how_many_sigma) from the median-filtered level.

The returned Rainbow object should be identical to the input one, except for the possibility that some elements in the ok array will have been marked as zero. (Neither the filtering nor the inflation is applied to the returned object.)

Parameters#

how_many_sigma : float, optional Standard deviations (sigmas) allowed for individual data points before they are flagged as outliers. remove_trends : bool, optional Should we remove trends from the flux data before trying to look for outliers? inflate_uncertainty : bool, optional Should uncertainties per wavelength be inflated to match the (MAD-based) standard deviation of the data?

Returns#

rainbow : Rainbow A new Rainbow object with the outliers flagged as 0 in .ok

Source code in chromatic/rainbows/actions/flag_outliers.py
def flag_outliers(self, how_many_sigma=5, remove_trends=True, inflate_uncertainty=True):
    """
    Flag outliers as not `ok`.

    This examines the flux array, identifies significant outliers,
    and marks them 0 in the `ok` array. The default procedure is to use
    a median filter to remove temporal trends (`remove_trends`),
    inflate the uncertainties based on the median-absolute-deviation
    scatter (`inflate_uncertainty`), and call points outliers if they
    deviate by more than a certain number of sigma (`how_many_sigma`)
    from the median-filtered level.

    The returned `Rainbow` object should be identical to the input
    one, except for the possibility that some elements in the `ok`
    array will have been marked as zero. (Neither the filtering nor
    the inflation is applied to the returned object.)

    Parameters
    ----------
    how_many_sigma : float, optional
        Standard deviations (sigmas) allowed for individual data
        points before they are flagged as outliers.
    remove_trends : bool, optional
        Should we remove trends from the flux data before
        trying to look for outliers?
    inflate_uncertainty : bool, optional
        Should uncertainties per wavelength be inflated to
        match the (MAD-based) standard deviation of the data?

    Returns
    -------
    rainbow : Rainbow
        A new Rainbow object with the outliers flagged as 0 in `.ok`
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("flag_outliers", locals())

    # create a copy of the existing rainbow
    new = self._create_copy()

    # how many outliers are expected from noise alone
    outliers_expected_from_normal_distribution = erfc(how_many_sigma) * self.nflux * 2
    if outliers_expected_from_normal_distribution >= 1:
        cheerfully_suggest(
            f"""
        When drawing from a normal distribution, an expected {outliers_expected_from_normal_distribution:.1f} out of
        the total {self.nflux} datapoints in {self} would be marked
        as a >{how_many_sigma} sigma outlier.

        If you don't want to accidentally clip legitimate data points that
        might have arisen merely by chance, please consider setting the
        outlier flagging threshold (`how_many_sigma=`) to a larger value.
        """
        )

    # create a trend-filtered object
    if remove_trends:
        filtered = new.remove_trends(method="median_filter", size=(3, 5))
    else:
        filtered = new._create_copy()

    # update the uncertainties, if need be
    if np.all(filtered.uncertainty == 0):
        filtered.uncertainty = (
            np.ones(filtered.shape)
            * filtered.get_measured_scatter(method="MAD")[:, np.newaxis]
        )
        inflate_uncertainty = False

    # inflate the per-wavelength uncertainties, as needed
    if inflate_uncertainty:
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            inflated = filtered.inflate_uncertainty(method="MAD", remove_trends=True)
    else:
        inflated = filtered

    # decide which points are outliers
    is_outlier = np.abs(inflated.flux - 1) > how_many_sigma * inflated.uncertainty

    # update the output object
    new.fluxlike["flagged_as_outlier"] = is_outlier
    new.ok = new.ok * ~is_outlier

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    return new
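
The MAD-based outlier test at the heart of this method can be sketched in numpy. This is a toy example with one planted outlier and made-up noise levels; the real method also median-filters temporal trends first:

```python
import numpy as np

rng = np.random.default_rng(42)
flux = rng.normal(1.0, 0.01, size=(10, 100))  # (nwave, ntime) toy data
flux[3, 50] = 1.2                             # plant one obvious outlier

# robust per-wavelength scatter via the median absolute deviation;
# 1.4826 converts MAD to an equivalent Gaussian standard deviation
level = np.median(flux, axis=1, keepdims=True)
mad = np.median(np.abs(flux - level), axis=1, keepdims=True)
sigma = 1.4826 * mad

# flag points deviating by more than `how_many_sigma` from the level
how_many_sigma = 5
is_outlier = np.abs(flux - level) > how_many_sigma * sigma
ok = ~is_outlier
```

Because the MAD is a median-based statistic, the planted 20-sigma spike barely perturbs the estimated scatter, which is why it's preferred over a plain standard deviation here.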

Fold this Rainbow to a period and reference epoch.

This changes the times from some original time into a phased time, for example the time within an orbital period, relative to the time of mid-transit. This is mostly a convenience function for plotting data relative to mid-transit and/or trimming data based on orbital phase.

Parameters#

period : Quantity The orbital period of the planet (with astropy units of time). t0 : Quantity Any mid-transit epoch (with astropy units of time). event : str A description of the event that happens periodically. For example, you might want to switch this to 'Mid-Eclipse' (as well as offsetting the t0 by the appropriate amount relative to transit). This description may be used in plot labels.

Returns#

folded : Rainbow The folded Rainbow.

Source code in chromatic/rainbows/actions/fold.py
def fold(self, period=None, t0=None, event="Mid-Transit"):
    """
    Fold this `Rainbow` to a period and reference epoch.

    This changes the times from some original time into
    a phased time, for example the time within an orbital
    period, relative to the time of mid-transit. This
    is mostly a convenience function for plotting data
    relative to mid-transit and/or trimming data based
    on orbital phase.

    Parameters
    ----------
    period : Quantity
        The orbital period of the planet (with astropy units of time).
    t0 : Quantity
        Any mid-transit epoch (with astropy units of time).
    event : str
        A description of the event that happens periodically.
        For example, you might want to switch this to
        'Mid-Eclipse' (as well as offsetting the `t0` by the
        appropriate amount relative to transit). This description
        may be used in plot labels.

    Returns
    -------
    folded : Rainbow
        The folded `Rainbow`.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("fold", locals())

    # warn
    if (period is None) or (t0 is None):
        message = """
        Folding to a transit period requires both
        `period` and `t0` be specified. Please try again.
        """
        cheerfully_suggest(message)
        return self

    # create a copy of the existing rainbow
    new = self._create_copy()

    # calculate predicted time from transit
    new.time = (((self.time - t0) + 0.5 * period) % period) - 0.5 * period
    # (the nudge by 0.5 period is to center on -period/2 to period/2)

    # change the default time label
    new.metadata["time_label"] = f"Time from {event}"

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    return new
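
The phase-folding arithmetic is a one-liner; here is a quick numpy check with hypothetical values for `period` and `t0` (plain floats in days, rather than astropy Quantities, just to show the modular arithmetic):

```python
import numpy as np

period = 3.5  # days (hypothetical)
t0 = 100.2    # mid-transit epoch, days (hypothetical)
time = np.array([99.0, 100.2, 101.5, 103.7, 107.2])

# shift by half a period before the modulo so results land in [-P/2, +P/2)
phased = (((time - t0) + 0.5 * period) % period) - 0.5 * period
```

Every time that is an integer number of periods from `t0` folds to phase 0, and all phases are centered on mid-transit rather than spanning [0, P).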

Inflate uncertainties to match observed scatter.

This is a quick and approximate tool for inflating the flux uncertainties in a Rainbow to match the observed scatter. With defaults, this will estimate the scatter using a robust median-absolute-deviation estimate of the standard deviation (method='MAD'), applied to time-series from which temporal trends have been removed (remove_trends=True), and inflate the uncertainties on a per-wavelength basis. The trend removal, by default by subtracting off local medians (remove_trends_method='median_filter'), will squash many types of both astrophysical and systematic trends, so this function should be used with caution in applications where precise and reliable uncertainties are needed.

Parameters#

method : string What method to use to obtain measured scatter. Current options are 'MAD', 'standard-deviation'. remove_trends : bool Should we remove trends before estimating by how much we need to inflate the uncertainties? remove_trends_method : str What method should be used to remove trends? See .remove_trends for options. remove_trends_kw : dict What keyword arguments should be passed to remove_trends? minimum_inflate_ratio : float, optional The minimum allowed inflation ratio. Uncertainties should not be deflated below this floor, except in the very specific case of unstable pipeline output.

Returns#

inflated : Rainbow The Rainbow with inflated uncertainties.

Source code in chromatic/rainbows/actions/inflate_uncertainty.py
def inflate_uncertainty(
    self,
    method="MAD",
    remove_trends=True,
    remove_trends_method="median_filter",
    remove_trends_kw={},
    minimum_inflate_ratio=1.0,
):
    """
    Inflate uncertainties to match observed scatter.

    This is a quick and approximate tool for inflating
    the flux uncertainties in a `Rainbow` to match the
    observed scatter. With defaults, this will estimate
    the scatter using a robust median-absolute-deviation
    estimate of the standard deviation (`method='MAD'`),
    applied to time-series from which temporal trends
    have been removed (`remove_trends=True`), and inflate
    the uncertainties on a per-wavelength basis. The trend
    removal, by default by subtracting off local medians
    (`remove_trends_method='median_filter'`), will squash
    many types of both astrophysical and systematic trends,
    so this function should be used with caution in
    applications where precise and reliable uncertainties
    are needed.

    Parameters
    ----------
    method : string
        What method to use to obtain measured scatter.
        Current options are 'MAD', 'standard-deviation'.
    remove_trends : bool
        Should we remove trends before estimating by how
        much we need to inflate the uncertainties?
    remove_trends_method : str
        What method should be used to remove trends?
        See `.remove_trends` for options.
    remove_trends_kw : dict
        What keyword arguments should be passed to `remove_trends`?
    minimum_inflate_ratio : float, optional
        The minimum allowed inflation ratio. Uncertainties
        should not be deflated below this floor, except in the
        very specific case of unstable pipeline output.

    Returns
    -------
    inflated : Rainbow
        The `Rainbow` with inflated uncertainties.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("inflate_uncertainty", locals())

    # create a new copy
    new = self._create_copy()

    # if desired, remove trends before estimating inflation factor
    if remove_trends:
        trend_removed = new.remove_trends(**remove_trends_kw)
    else:
        trend_removed = new

    # estimate the scatter
    measured_scatter = trend_removed.get_measured_scatter(
        method=method, minimum_acceptable_ok=1e-10
    )

    # get the expected uncertainty
    expected_uncertainty = trend_removed.get_expected_uncertainty()

    # calculate the necessary inflation ratio
    inflate_ratio = measured_scatter / expected_uncertainty

    # warn if there are some inflation ratios below minimum (usually = 1)
    if np.min(inflate_ratio) < minimum_inflate_ratio:
        cheerfully_suggest(
            f"""
        {np.sum(inflate_ratio < minimum_inflate_ratio)} uncertainty inflation ratios would be below
        the `minimum_inflate_ratio` of {minimum_inflate_ratio}, so they have not been changed.
        """
        )
        inflate_ratio = np.maximum(inflate_ratio, minimum_inflate_ratio)

    # store the inflation ratio
    new.wavelike["inflate_ratio"] = inflate_ratio

    # inflate the uncertainties
    new.uncertainty = new.uncertainty * inflate_ratio[:, np.newaxis]

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # return the new Rainbow
    return new
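
The core arithmetic here (measured over expected scatter, clipped so uncertainties never shrink) works standalone in numpy; the per-wavelength numbers below are made up for illustration:

```python
import numpy as np

measured_scatter = np.array([0.012, 0.009, 0.020])      # toy per-wavelength scatter
expected_uncertainty = np.array([0.010, 0.010, 0.010])  # toy expected values

# ratio of what we see to what photon noise predicts
inflate_ratio = measured_scatter / expected_uncertainty  # [1.2, 0.9, 2.0]

# never deflate: clip the ratio at the minimum (usually 1)
minimum_inflate_ratio = 1.0
inflate_ratio = np.maximum(inflate_ratio, minimum_inflate_ratio)

# broadcast the per-wavelength ratio across all times
uncertainty = np.full((3, 5), 0.01)
inflated = uncertainty * inflate_ratio[:, np.newaxis]
```

The `[:, np.newaxis]` broadcast mirrors the line in the source above: one inflation factor per wavelength, applied uniformly across the time axis.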

Inject uncorrelated random noise into the .flux array.

This injects independent noise into each data point, drawn from either a Gaussian or Poisson distribution. The inputs can be scalars, or arrays that will be broadcast into the shape of the .flux array.

Parameters#

signal_to_noise : float, array, optional The signal-to-noise per wavelength per time. For example, S/N=100 would mean that the uncertainty on the flux for each wavelength-time data point will be 1%. If it is a scalar, then every point is the same. If it is an array with a fluxlike, wavelike, or timelike shape it will be broadcast appropriately.

number_of_photons : float, array, optional The number of photons expected to be received from the light source per wavelength and time. If it is a scalar, then every point is the same. If it is an array with a fluxlike, wavelike, or timelike shape it will be broadcast appropriately. If number_of_photons is set, then signal_to_noise will be ignored.

Returns#

rainbow : Rainbow A new Rainbow object with the noise injected.

Source code in chromatic/rainbows/actions/inject_noise.py
def inject_noise(self, signal_to_noise=100, number_of_photons=None):
    """
    Inject uncorrelated random noise into the `.flux` array.

    This injects independent noise to each data point,
    drawn from either a Gaussian or Poisson distribution.
    The inputs can be scalars, or arrays that will be
    broadcast into the shape of the `.flux` array.

    Parameters
    ----------

    signal_to_noise : float, array, optional
        The signal-to-noise per wavelength per time.
        For example, S/N=100 would mean that the
        uncertainty on the flux for each
        wavelength-time data point will be 1%.
        If it is a scalar, then every point is the same.
        If it is an array with a fluxlike, wavelike,
        or timelike shape it will be broadcast
        appropriately.
    number_of_photons : float, array, optional
        The number of photons expected to be received
        from the light source per wavelength and time.
        If it is a scalar, then every point is the same.
        If it is an array with a fluxlike, wavelike,
        or timelike shape it will be broadcast
        appropriately.
        If `number_of_photons` is set, then `signal_to_noise`
        will be ignored.

    Returns
    -------
    rainbow : Rainbow
        A new `Rainbow` object with the noise injected.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("inject_noise", locals())

    # create a copy of the existing Rainbow
    new = self._create_copy()

    # get the underlying model (or create one if needed)
    if "model" in new.fluxlike:
        model = new.fluxlike["model"]
    else:
        # kludge, do we really want to allow this?
        model = self.flux * 1
        new.fluxlike["model"] = model

    # setting up an if/else statement so that the user
    # can choose if they want to use their own
    # number_of_photons or the automatic signal_to_noise
    # noise generation
    if number_of_photons is not None:
        if u.Quantity(model).unit != u.Unit(""):
            raise ValueError(
                f"""
            We haven't yet implemented `number_of_photons` noise
            for models that have units associated with them. Sorry!
            """
            )

        mu = model * self._broadcast_to_fluxlike(number_of_photons)

        # convert the model to photons and store it
        new.fluxlike["model"] = mu * u.photon

        # inject a realization of noise using number_of_photons
        # (yields poisson distribution)
        new.fluxlike["flux"] = np.random.poisson(mu) * u.photon  # mu is the center

        # store number of photons as metadata
        new.metadata["number_of_photons"] = number_of_photons

        # calculate the uncertainty
        uncertainty = np.sqrt(mu)
        new.fluxlike["uncertainty"] = uncertainty * u.photon

        # append the history entry to the new Rainbow
        new._record_history_entry(h)

    else:
        # calculate the uncertainty with a fixed S/N
        uncertainty = model / self._broadcast_to_fluxlike(signal_to_noise)
        new.fluxlike["uncertainty"] = uncertainty

        # inject a realization of the noise
        if isinstance(model, u.Quantity):
            unit = model.unit
            loc = model.to_value(unit)
            scale = uncertainty.to_value(unit)
        else:
            unit = 1
            loc = model
            scale = uncertainty
        new.fluxlike["flux"] = np.random.normal(loc, scale) * unit

        # store S/N as metadata
        new.metadata["signal_to_noise"] = signal_to_noise

        # append the history entry to the new Rainbow
        new._record_history_entry(h)

    # return the new object
    return new
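
The two noise modes can be sketched in plain numpy (a toy flat model; the seed and photon count below are arbitrary choices, not `chromatic` defaults):

```python
import numpy as np

rng = np.random.default_rng(1)
model = np.ones((4, 6))  # a flat toy model, shape (nwave, ntime)

# Gaussian mode: a fixed signal-to-noise sets the uncertainty directly
signal_to_noise = 100
uncertainty = model / signal_to_noise          # 1% per point
flux_gaussian = rng.normal(model, uncertainty)

# Poisson mode: expected photon counts set both the level and the scatter
number_of_photons = 1e6
mu = model * number_of_photons
flux_poisson = rng.poisson(mu)                 # integer photon counts
uncertainty_poisson = np.sqrt(mu)              # Poisson sigma = sqrt(mu)
```

In the Poisson mode the uncertainty is derived from the counts themselves (√μ), which is why `signal_to_noise` is ignored whenever `number_of_photons` is given.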

Inject some random outliers.

To approximately simulate cosmic rays or other rare weird outliers, this randomly injects outliers into a small fraction of pixels. For this simple method, outliers will have the same amplitude, either as a ratio above the per-data-point uncertainty or as a fixed number (if no uncertainties exist).

Parameters#

fraction : float, optional The fraction of pixels that should get outliers. (default = 0.01) amplitude : float, optional If uncertainty > 0, how many sigma should outliers be? If uncertainty = 0, what number should be injected? (default = 10)

Returns#

rainbow : Rainbow A new Rainbow object with outliers injected.

Source code in chromatic/rainbows/actions/inject_outliers.py
def inject_outliers(self, fraction=0.01, amplitude=10):
    """
    Inject some random outliers.

    To approximately simulate cosmic rays or other
    rare weird outliers, this randomly injects
    outliers into a small fraction of pixels. For
    this simple method, outliers will have the same
    amplitude, either as a ratio above the per-data-point
    uncertainty or as a fixed number (if no uncertainties exist).

    Parameters
    ----------
    fraction : float, optional
        The fraction of pixels that should get outliers.
        (default = 0.01)
    amplitude : float, optional
        If uncertainty > 0, how many sigma should outliers be?
        If uncertainty = 0, what number should be injected?
        (default = 10)

    Returns
    -------
    rainbow : Rainbow
        A new `Rainbow` object with outliers injected.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("inject_outliers", locals())

    # create a copy of the existing Rainbow
    new = self._create_copy()

    # pick some random pixels to inject outliers
    outliers = np.random.uniform(0, 1, self.shape) < fraction

    # inject outliers based on uncertainty if possible
    if np.any(self.uncertainty > 0):
        new.fluxlike["injected_outliers"] = outliers * amplitude * self.uncertainty
    else:
        new.fluxlike["injected_outliers"] = outliers * amplitude

    # modify the flux
    new.fluxlike["flux"] += new.fluxlike["injected_outliers"]

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # return the new object
    return new

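Stripped of the `Rainbow` bookkeeping, the injection math is just a random mask times an amplitude; here is a plain-`numpy` sketch with made-up array sizes and uncertainties:

```python
import numpy as np

rng = np.random.default_rng(42)

# a toy (nwave, ntime) dataset, with made-up values for illustration
nwave, ntime = 5, 100
flux = np.ones((nwave, ntime))
uncertainty = 0.01 * np.ones((nwave, ntime))

fraction, amplitude = 0.01, 10

# randomly flag a small fraction of data points as outliers
outliers = rng.uniform(0, 1, flux.shape) < fraction

# each outlier is `amplitude` times the per-data-point uncertainty
injected = outliers * amplitude * uncertainty
flux_with_outliers = flux + injected
```

If no uncertainties existed (`uncertainty = 0` everywhere), the method instead adds the bare `amplitude` at the flagged points.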
Inject a stellar spectrum into the flux.

This injects a constant stellar spectrum into all times in the Rainbow. Injection happens by multiplying the .model flux array, so for example a model that already has a transit in it will be scaled up to match the stellar spectrum in all wavelengths.

Parameters#

temperature : Quantity, optional Temperature, in K (with no astropy units attached). logg : float, optional Surface gravity log10[g/(cm/s**2)] (with no astropy units attached). metallicity : float, optional Metallicity log10[metals/solar] (with no astropy units attached). radius : Quantity, optional The radius of the star. distance : Quantity, optional The distance to the star. phoenix : bool, optional If True, use PHOENIX surface flux. If False, use Planck surface flux.

Returns#

rainbow : Rainbow A new Rainbow object with the spectrum injected.

Source code in chromatic/rainbows/actions/inject_spectrum.py
def inject_spectrum(
    self,
    temperature=5800 * u.K,
    logg=4.43,
    metallicity=0.0,
    radius=1 * u.Rsun,
    distance=10 * u.pc,
    phoenix=True,
):
    """
    Inject a stellar spectrum into the flux.

    This injects a constant stellar spectrum into
    all times in the `Rainbow`. Injection happens
    by multiplying the `.model` flux array, so for
    example a model that already has a transit in
    it will be scaled up to match the stellar spectrum
    in all wavelengths.

    Parameters
    ----------
    temperature : Quantity, optional
        Temperature, in K (with no astropy units attached).
    logg : float, optional
        Surface gravity log10[g/(cm/s**2)] (with no astropy units attached).
    metallicity : float, optional
        Metallicity log10[metals/solar] (with no astropy units attached).
    radius : Quantity, optional
        The radius of the star.
    distance : Quantity, optional
        The distance to the star.
    phoenix : bool, optional
        If `True`, use PHOENIX surface flux.
        If `False`, use Planck surface flux.

    Returns
    -------
    rainbow : Rainbow
        A new `Rainbow` object with the spectrum injected.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("inject_spectrum", locals())

    # create a copy of the existing Rainbow
    new = self._create_copy()

    # warn if maybe we shouldn't inject anything
    if np.all(u.Quantity(self.flux).value != 1):
        cheerfully_suggest(
            f"""
        None of the pre-existing flux values were 1,
        which hints at the possibility that there
        might already be a spectrum in them. Please
        watch out for weird units or values!
        """
        )

    if phoenix:
        f = get_phoenix_photons
    else:
        f = get_planck_photons

    # get the spectrum from the surface
    _, surface_flux = f(
        temperature=u.Quantity(temperature).value,
        logg=logg,
        metallicity=metallicity,
        wavelength=self.wavelength,
    )

    # get the received flux at Earth
    received_flux = surface_flux * (radius / distance).decompose() ** 2

    # do math with spectrum
    for k in ["flux", "model", "uncertainty"]:
        try:
            new.fluxlike[k] = self.get(k) * self._broadcast_to_fluxlike(received_flux)
        except KeyError:
            pass

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # return the new object
    return new

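The scaling from stellar surface to Earth can be sketched in plain `numpy`; the surface fluxes below are illustrative stand-ins, not values from `get_phoenix_photons`, and the constants are approximate:

```python
import numpy as np

# a hypothetical stellar surface flux at three wavelengths (arbitrary units)
surface_flux = np.array([1.0e26, 1.2e26, 0.9e26])

# flux received at Earth is diluted by the geometric factor (radius / distance)**2
radius_m = 6.957e8            # ~1 solar radius, in meters
distance_m = 10 * 3.0857e16   # ~10 parsecs, in meters
received_flux = surface_flux * (radius_m / distance_m) ** 2

# multiply a toy (nwave, ntime) flux array by the spectrum,
# broadcasting the one-value-per-wavelength spectrum across all times
flux = np.ones((3, 4))
injected = flux * received_flux[:, np.newaxis]
```

Because the injection is multiplicative, any structure already in the flux (like a transit) is preserved, just rescaled wavelength by wavelength.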
Inject some (very cartoony) instrumental systematics.

Here's the basic procedure:

1) Generate some fake variables that vary either just with wavelength, just with time, or with both time and wavelength. Store these variables for later use. For example, these might represent an average x and y centroid of the trace on the detector (one for each time), or the background flux associated with each wavelength (one for each time and for each wavelength).

2) Generate a flux model as some function of those variables. In reality, we probably don't know the actual relationship between these inputs and the flux, but we can imagine one!

3) Inject the model flux into the flux of this Rainbow, and store the combined model in systematics_model and each individual component in systematics_model_from_{...}.

Parameters#

amplitude : float, optional The (standard deviation-ish) amplitude of the systematics in units normalized to 1. For example, an amplitude of 0.003 will produce systematic trends that tend to range (at 1 sigma) from 0.997 to 1.003. wavelike : list of strings, optional A list of wave-like cotrending quantities to serve as ingredients to a linear combination systematics model. Existing quantities will be pulled from the appropriate core dictionary; fake data will be created for quantities that don't already exist, from a cartoony Gaussian process model. timelike : list of strings, optional A list of time-like cotrending quantities to serve as ingredients to a linear combination systematics model. Existing quantities will be pulled from the appropriate core dictionary; fake data will be created for quantities that don't already exist, from a cartoony Gaussian process model. fluxlike : list of strings, optional A list of flux-like cotrending quantities to serve as ingredients to a linear combination systematics model. Existing quantities will be pulled from the appropriate core dictionary; fake data will be created for quantities that don't already exist, from a cartoony Gaussian process model.

Returns#

rainbow : Rainbow A new Rainbow object with the systematics injected.

Source code in chromatic/rainbows/actions/inject_systematics.py
def inject_systematics(
    self,
    amplitude=0.003,
    wavelike=[],
    timelike=["x", "y", "time"],
    fluxlike=["background"],
):

    """
    Inject some (very cartoony) instrumental systematics.

    Here's the basic procedure:

    1) Generate some fake variables that vary either just with
    wavelength, just with time, or with both time and wavelength.
    Store these variables for later use. For example, these might
    represent an average `x` and `y` centroid of the trace on the
    detector (one for each time), or the background flux associated
    with each wavelength (one for each time and for each wavelength).

    2) Generate a flux model as some function of those variables.
    In reality, we probably don't know the actual relationship
    between these inputs and the flux, but we can imagine one!

    3) Inject the model flux into the `flux` of this Rainbow,
    and store the combined model in `systematics_model` and
    each individual component in `systematics_model_from_{...}`.

    Parameters
    ----------
    amplitude : float, optional
        The (standard deviation-ish) amplitude of the systematics
        in units normalized to 1. For example, an amplitude of 0.003
        will produce systematic trends that tend to range (at 1 sigma)
        from 0.997 to 1.003.
    wavelike : list of strings, optional
        A list of wave-like cotrending quantities to serve as ingredients
        to a linear combination systematics model. Existing quantities
        will be pulled from the appropriate core dictionary; fake
        data will be created for quantities that don't already exist,
        from a cartoony Gaussian process model.
    timelike : list of strings, optional
        A list of time-like cotrending quantities to serve as ingredients
        to a linear combination systematics model. Existing quantities
        will be pulled from the appropriate core dictionary; fake
        data will be created for quantities that don't already exist,
        from a cartoony Gaussian process model.
    fluxlike : list of strings, optional
        A list of flux-like cotrending quantities to serve as ingredients
        to a linear combination systematics model. Existing quantities
        will be pulled from the appropriate core dictionary; fake
        data will be created for quantities that don't already exist,
        from a cartoony Gaussian process model.

    Returns
    -------
    rainbow : Rainbow
        A new Rainbow object with the systematics injected.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("inject_systematics", locals())

    # create a copy of the existing Rainbow
    new = self._create_copy()
    new.fluxlike["systematics_model"] = np.ones(self.shape)

    def standardize(q):
        """
        A quick helper to normalize all inputs to zero mean
        and unit standard deviation. It
        """
        offset = np.nanmean(q)
        sigma = np.nanstd(q)
        return u.Quantity((q - offset) / sigma).value, offset, sigma

    components = {}
    for k in wavelike:
        if k in self.wavelike:
            x, offset, sigma = standardize(self.wavelike[k])
        else:
            x = new._create_fake_wavelike_quantity()
            offset, sigma = 0, 1
            new.wavelike[k] = x
        c = np.random.normal(0, amplitude)
        df = c * x[:, np.newaxis] * np.ones(self.shape)
        new.fluxlike[f"systematics_model_from_{k}"] = df
        new.fluxlike["systematics_model"] += df
        components.update(
            **{
                f"linear_{k}": f"c_{k}*({k} - offset_{k})/sigma_{k}",
                f"c_{k}": c,
                f"offset_{k}": offset,
                f"sigma_{k}": sigma,
            }
        )

    for k in timelike:
        if k in self.timelike:
            x, offset, sigma = standardize(self.timelike[k])
        else:
            x = new._create_fake_timelike_quantity()
            offset, sigma = 0, 1
            new.timelike[k] = x
        c = np.random.normal(0, amplitude)
        df = c * x[np.newaxis, :] * np.ones(self.shape)
        new.fluxlike[f"systematics_model_from_{k}"] = df
        new.fluxlike["systematics_model"] += df
        components.update(
            **{
                f"linear_{k}": f"c_{k}*({k} - offset_{k})/sigma_{k}",
                f"c_{k}": c,
                f"offset_{k}": offset,
                f"sigma_{k}": sigma,
            }
        )

    for k in fluxlike:
        if k in self.fluxlike:
            x, offset, sigma = standardize(self.fluxlike[k])
        else:
            x = new._create_fake_fluxlike_quantity()
            offset, sigma = 0, 1
            new.fluxlike[k] = x
        c = np.random.normal(0, amplitude)
        df = c * x * np.ones(self.shape)
        new.fluxlike[f"systematics_model_from_{k}"] = df
        new.fluxlike["systematics_model"] += df
        components.update(
            **{
                f"linear_{k}": f"c_{k}*({k} - offset_{k})/sigma_{k}",
                f"c_{k}": c,
                f"offset_{k}": offset,
                f"sigma_{k}": sigma,
            }
        )

    new.metadata["systematics_components"] = components
    new.metadata["systematics_equation"] = "f = 1\n  + " + "\n  + ".join(
        [v for k, v in components.items() if k[:7] == "linear_"]
    )

    # modify both the model and flux arrays
    new.flux *= new.systematics_model
    new.model = new.fluxlike.get("model", 1) * new.systematics_model

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # return the new object
    return new

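The `standardize` helper and the linear-trend injection reduce to a few lines of `numpy`; this sketch uses a made-up timelike quantity and a fixed random seed:

```python
import numpy as np

def standardize(q):
    # normalize to zero mean and unit standard deviation,
    # remembering the offset and sigma that were divided out
    offset, sigma = np.nanmean(q), np.nanstd(q)
    return (q - offset) / sigma, offset, sigma

rng = np.random.default_rng(0)
nwave, ntime = 4, 50
time = np.linspace(0, 1, ntime)  # a made-up timelike quantity

amplitude = 0.003
x, offset, sigma = standardize(time)

# draw a random linear coefficient and broadcast the trend across wavelengths
c = rng.normal(0, amplitude)
df = c * x[np.newaxis, :] * np.ones((nwave, ntime))

# the systematics model starts at 1 and accumulates each component
systematics_model = 1 + df
flux = np.ones((nwave, ntime)) * systematics_model
```

Each wavelike, timelike, or fluxlike ingredient contributes one such `df` term, so the full model is a linear combination of standardized cotrending quantities.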
Simulate a wavelength-dependent planetary transit.

This uses one of a few methods to inject a transit signal into the Rainbow, allowing the transit depth to change with wavelength (for example due to a planet's effective radius changing with wavelength because of its atmospheric transmission spectrum). Other parameters can also be wavelength-dependent, but some (like period, inclination, etc.) probably shouldn't be.

The current methods include:

'trapezoid' to inject a cartoon transit, using nomenclature from Winn (2010). This method avoids package dependencies that can be finicky to compile and/or install on different operating systems.

'exoplanet' to inject a limb-darkened transit using exoplanet-core. This option requires exoplanet-core be installed, but it doesn't require complicated dependencies or compiling steps, so it's already included as a dependency.

'batman' to inject a limb-darkened transit using batman-package This method requires that batman-package be installed, and it will try to throw a helpful warning message if it's not.

Parameters#

planet_radius : float, array, None The planet-to-star radius ratio = [transit depth]**0.5, which can be either a single value for all wavelengths, or an array with one value for each wavelength. method : str What method should be used to inject the transits? Different methods will produce different results and have different options. The currently implemented options are 'trapezoid', 'exoplanet', and 'batman'. transit_parameters : dict All additional keywords will be passed to the transit model. The accepted keywords for the different methods are as follows. 'trapezoid' accepts the following keyword arguments: delta = The depth of the transit, as a fraction of the out-of-transit flux (default 0.01) (If not provided, it will be set by planet_radius.) P = The orbital period of the planet, in days (default 1.0) t0 = Mid-transit time of the transit, in days (default 0.0) T = The duration of the transit (from mid-ingress to mid-egress), in days (default 0.1) tau = The duration of ingress/egress, in days (default 0.01) baseline = The baseline, out-of-transit flux level (default 1.0) 'exoplanet' accepts the following keyword arguments: rp = (planet radius)/(star radius), unitless (default 0.1) (If not provided, it will be set by planet_radius.) t0 = Mid-transit time of the transit, in days (default 0.0) per = The orbital period of the planet, in days (default 3.0) a = (semi-major axis)/(star radius), unitless (default 10) inc = The orbital inclination, in degrees (default 90) ecc = The orbital eccentricity, unitless (default 0.0) w = The longitude of periastron, in degrees (default 0.0) u = The quadratic limb-darkening coefficients (default [0.2, 0.2]) These coefficients can only be a 2D array of the form (n_wavelengths, n_coefficients) where each row is the set of limb-darkening coefficients corresponding to a single wavelength 'batman' accepts the following keyword arguments: rp = (planet radius)/(star radius), unitless (default 0.1) (If not provided, it will be set by planet_radius.) t0 = Mid-transit time of the transit, in days (default 0.0) per = The orbital period of the planet, in days (default 3.0) a = (semi-major axis)/(star radius), unitless (default 10) inc = The orbital inclination, in degrees (default 90) ecc = The orbital eccentricity, unitless (default 0.0) w = The longitude of periastron, in degrees (default 0.0) limb_dark = The limb-darkening model (default "quadratic"), possible values described in more detail in batman documentation. u = The limb-darkening coefficients (default [0.2, 0.2]) These coefficients can be: -one value (if limb-darkening law requires only one value) -a 1D list/array of coefficients for constant limb-darkening -a 2D array of the form (n_wavelengths, n_coefficients) where each row is the set of limb-darkening coefficients corresponding to a single wavelength Note that this currently does not calculate the appropriate coefficient vs wavelength variations itself; there exist codes (such as hpparvi/PyLDTk and nespinoza/limb-darkening) which can be used for this.

Source code in chromatic/rainbows/actions/inject_transit.py
def inject_transit(
    self,
    planet_radius=0.1,
    method="exoplanet",
    **transit_parameters,
):

    """
    Simulate a wavelength-dependent planetary transit.

    This uses one of a few methods to inject a transit
    signal into the `Rainbow`, allowing the transit
    depth to change with wavelength (for example due to a
    planet's effective radius changing with wavelength due
    to its atmospheric transmission spectrum). Other
    parameters can also be wavelength-dependent, but
    some (like period, inclination, etc...) probably
    shouldn't be.

    The current methods include:

    `'trapezoid'` to inject a cartoon transit, using nomenclature
    from [Winn (2010)](https://arxiv.org/abs/1001.2010).
    This method avoids package dependencies that can be
    finicky to compile and/or install on different
    operating systems.

    `'exoplanet'` to inject a limb-darkened transit using [exoplanet-core](https://github.com/exoplanet-dev/exoplanet-core).
    This option requires `exoplanet-core` be installed,
    but it doesn't require complicated dependencies or
    compiling steps, so it's already included as a dependency.

    `'batman'` to inject a limb-darkened transit using [batman-package](https://lkreidberg.github.io/batman/docs/html/index.html)
    This method requires that `batman-package` be installed,
    and it will try to throw a helpful warning message if
    it's not.

    Parameters
    ----------
    planet_radius : float, array, None
        The planet-to-star radius ratio = [transit depth]**0.5,
        which can be either a single value for all wavelengths,
        or an array with one value for each wavelength.
    method : str
        What method should be used to inject the transits? Different
        methods will produce different results and have different options.
        The currently implemented options are `'trapezoid'`, `'exoplanet'`, and `'batman'`.
    **transit_parameters : dict
        All additional keywords will be passed to the transit model.
        The accepted keywords for the different methods are as follows.
            `'trapezoid'` accepts the following keyword arguments:
                `delta` = The depth of the transit, as a fraction of the out-of-transit flux (default 0.01)
                (If not provided, it will be set by `planet_radius`.)
                `P` = The orbital period of the planet, in days (default 1.0)
                `t0` = Mid-transit time of the transit, in days (default 0.0)
                `T` = The duration of the transit (from mid-ingress to mid-egress), in days (default 0.1)
                `tau` = The duration of ingress/egress, in days (default 0.01)
                `baseline` = The baseline, out-of-transit flux level (default 1.0)
            `'exoplanet'` accepts the following keyword arguments:
                `rp` = (planet radius)/(star radius), unitless (default 0.1)
                (If not provided, it will be set by `planet_radius`.)
                `t0` = Mid-transit time of the transit, in days (default 0.0)
                `per` = The orbital period of the planet, in days (default 3.0)
                `a` = (semi-major axis)/(star radius), unitless (default 10)
                `inc` = The orbital inclination, in degrees (default 90)
                `ecc` = The orbital eccentricity, unitless (default 0.0)
                `w` = The longitude of periastron, in degrees (default 0.0)
                `u` = The quadratic limb-darkening coefficients (default [0.2, 0.2])
                    These coefficients can only be a 2D array of the form (n_wavelengths, n_coefficients) where
                    each row is the set of limb-darkening coefficients corresponding
                    to a single wavelength
            `'batman'` accepts the following keyword arguments:
                `rp` = (planet radius)/(star radius), unitless (default 0.1)
                (If not provided, it will be set by `planet_radius`.)
                `t0` = Mid-transit time of the transit, in days (default 0.0)
                `per` = The orbital period of the planet, in days (default 3.0)
                `a` = (semi-major axis)/(star radius), unitless (default 10)
                `inc` = The orbital inclination, in degrees (default 90)
                `ecc` = The orbital eccentricity, unitless (default 0.0)
                `w` = The longitude of periastron, in degrees (default 0.0)
                `limb_dark` = The limb-darkening model (default "quadratic"), possible
                    values described in more detail in batman documentation.
                `u` = The limb-darkening coefficients (default [0.2, 0.2])
                    These coefficients can be:
                        -one value (if limb-darkening law requires only one value)
                        -a 1D list/array of coefficients for constant limb-darkening
                        -a 2D array of the form (n_wavelengths, n_coefficients) where
                        each row is the set of limb-darkening coefficients corresponding
                        to a single wavelength
                    Note that this currently does not calculate the appropriate
                    coefficient vs wavelength variations itself; there exist codes
                    (such as hpparvi/PyLDTk and nespinoza/limb-darkening) which
                    can be used for this.


    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("inject_transit", locals())
    h = h.replace("transit_parameters={", "**{")

    # create a copy of the existing Rainbow
    new = self._create_copy()

    # make sure the depth is set, with some flexibility
    # to allow for different names. parameter names that
    # belong directly to the transit model [delta, rp]
    # will take precedence first, then [depth], then
    # [planet_radius = the default]

    # set defaults for planet simulation
    if method == "trapezoid":
        parameters_to_use = {
            "delta": planet_radius**2 * np.sign(planet_radius),
            "P": 1.0,
            "t0": 0.0,
            "T": 0.1,
            "tau": 0.01,
            "baseline": 1.0,
        }
    elif method == "exoplanet":
        parameters_to_use = {
            "rp": planet_radius,
            "t0": 0.0,
            "per": 3.0,
            "a": 10.0,
            "inc": 90.0,
            "ecc": 0.0,
            "w": 0.0,
            "u": [[0.2, 0.2]],
        }
    elif method == "batman":
        parameters_to_use = {
            "rp": planet_radius,
            "t0": 0.0,
            "per": 3.0,
            "a": 10.0,
            "inc": 90.0,
            "ecc": 0.0,
            "w": 0.0,
            "limb_dark": "quadratic",
            "u": [[0.2, 0.2]],
        }
    else:
        raise ValueError(
            f"""
        'method' must be one of ['exoplanet', 'trapezoid', 'batman']
        """
        )

    # update based on explicit keyword arguments
    parameters_to_use.update(**transit_parameters)

    # check the parameter shapes are legitimate
    for k, v in parameters_to_use.items():
        s = np.shape(v)
        if (s != ()) and (s[0] not in [1, new.nwave]):
            raise ValueError(
                f"""
            The parameter {k}={v}
            has a shape of {np.shape(v)}, which we don't know
            how to interpret. It should be a single value,
            or have a first dimension of either 1 or nwave={new.nwave}.
            """
            )

    # call the model for each wavelength
    t = new.time.to_value("day")
    cached_inputs = {}
    planet_flux = np.ones(new.shape)
    for i in range(self.nwave):
        parameters_for_this_wavelength = {
            k: get_for_wavelength(parameters_to_use[k], i) for k in parameters_to_use
        }
        f = transit_model_functions[method]
        monochromatic_flux, cached_inputs = f(
            t, **parameters_for_this_wavelength, **cached_inputs
        )
        planet_flux[i, :] = monochromatic_flux

    # store the model in the new Rainbow object
    new.planet_model = planet_flux
    new.flux *= new.planet_model
    new.model = new.fluxlike.get("model", 1) * new.planet_model

    # store the injected parameters as metadata or wavelike
    new.metadata["injected_transit_method"] = method
    new.metadata["injected_transit_parameters"] = parameters_to_use
    for k, v in parameters_to_use.items():
        label = f"injected_transit_{k}"
        s = np.shape(v)
        if s == ():
            continue
        elif s[0] == new.nwave:
            if len(s) == 1:
                new.wavelike[label] = v
            elif len(s) > 1:
                for i in range(s[1]):
                    new.wavelike[f"{label}{i+1}"] = v[:, i]

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # return the new object
    return new

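For the `'trapezoid'` method, the light-curve shape can be sketched directly from the `delta`/`P`/`t0`/`T`/`tau`/`baseline` parameters described above; the phase-folding and ramp details here are this sketch's assumptions, not necessarily the package's exact implementation:

```python
import numpy as np

def trapezoid_transit(t, delta=0.01, P=3.0, t0=0.0, T=0.1, tau=0.01, baseline=1.0):
    # time from the nearest mid-transit, after phase-folding on the period
    x = np.abs((t - t0 + 0.5 * P) % P - 0.5 * P)
    flux = np.ones_like(t) * baseline
    # fully inside transit: flat bottom at a depth of delta
    flux[x <= T / 2 - tau / 2] -= delta
    # on the ingress/egress ramps: linear slope over a duration of tau
    ramp = (x > T / 2 - tau / 2) & (x < T / 2 + tau / 2)
    flux[ramp] -= delta * (T / 2 + tau / 2 - x[ramp]) / tau
    return flux

t = np.linspace(-0.2, 0.2, 401)
f = trapezoid_transit(t, delta=0.1**2)  # depth = planet_radius**2
```

Evaluating this once per wavelength, with a per-wavelength `delta`, gives the wavelength-dependent `planet_model` that gets multiplied into the flux.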
normalize(self, axis='wavelength', percentile=50) #

Normalize by dividing through by the median spectrum and/or lightcurve.

This normalizes a Rainbow by estimating a wavelength-dependent normalization and dividing through by it. With default inputs, this normalizes each wavelength to have flux values near 1, making it easier to see differences across time (such as a transit or eclipse). This function can also divide through by a median light curve, to make it easier to see variations across wavelength.

Parameters#

axis : str The axis that should be normalized out. w or wave or wavelength will divide out the typical spectrum. t or time will divide out the typical light curve. percentile : float A number between 0 and 100, specifying the percentile of the data along an axis to use as the reference. The default of percentile=50 corresponds to the median. If you want to normalize to out-of-transit, maybe you want a higher percentile. If you want to normalize to the baseline below a flare, maybe you want a lower percentile.

Returns#

normalized : Rainbow The normalized Rainbow.

Source code in chromatic/rainbows/actions/normalization.py
def normalize(self, axis="wavelength", percentile=50):
    """
    Normalize by dividing through by the median spectrum and/or lightcurve.

    This normalizes a `Rainbow` by estimating a
    wavelength-dependent normalization and dividing
    through by it. With default inputs, this normalizes
    each wavelength to have flux values near 1, making
    it easier to see differences across time (such as
    a transit or eclipse). This function can also divide
    through by a median light curve, to make it easier
    to see variations across wavelength.

    Parameters
    ----------
    axis : str
        The axis that should be normalized out.
        `w` or `wave` or `wavelength` will divide out the typical spectrum.
        `t` or `time` will divide out the typical light curve

    percentile : float
        A number between 0 and 100, specifying the percentile
        of the data along an axis to use as the reference.
        The default of `percentile=50` corresponds to the median.
        If you want to normalize to out-of-transit, maybe you
        want a higher percentile. If you want to normalize to
        the baseline below a flare, maybe you want a lower
        percentile.

    Returns
    -------
    normalized : Rainbow
        The normalized Rainbow.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("normalize", locals())

    # create an empty copy
    new = self._create_copy()

    # shortcut for the first letter of the axis
    a = axis.lower()[0]

    # (ignore nan warnings)
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")

        # get fluxes, with not-OK replaced with nans
        flux_for_normalizing = new.get_ok_data()
        negative_normalization_message = ""
        if a == "w":
            normalization = np.nanpercentile(
                flux_for_normalizing, percentile, axis=self.timeaxis
            )
            for k in self._keys_that_respond_to_math:
                new.fluxlike[k] = new.get(k) / normalization[:, np.newaxis]
            try:
                new.fluxlike["uncertainty"] = (
                    self.uncertainty / normalization[:, np.newaxis]
                )
            except ValueError:
                pass

        elif a == "t":
            normalization = np.nanpercentile(
                flux_for_normalizing, percentile, axis=self.waveaxis
            )
            for k in self._keys_that_respond_to_math:
                new.fluxlike[k] = new.get(k) / normalization[np.newaxis, :]
            try:
                new.fluxlike["uncertainty"] = (
                    self.uncertainty / normalization[np.newaxis, :]
                )
            except ValueError:
                pass

    if a in "wt":
        thing = {"w": "wavelengths", "t": "times"}[a]
        fix = {
            "w": """
                ok = rainbow.get_median_spectrum() > 0
                rainbow[ok, :].normalize()
        """,
            "t": """
                ok = rainbow.get_median_lightcurve() > 0
                rainbow[:, ok].normalize()
        """,
        }[a]
        if np.any(normalization < 0):
            cheerfully_suggest(
                f"""
            There are {np.sum(normalization < 0)} negative {thing} that
            are going into the normalization of this Rainbow. If you're
            not expecting negative fluxes, it may be useful to trim them
            away with something like:

            {fix}

            Otherwise, watch out that your fluxes and uncertainties may
            potentially have flipped sign!
            """
            )

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # return the new Rainbow
    return new

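The wavelength-axis case reduces to a `nanpercentile` along the time axis followed by a broadcasted division; a toy example with made-up fluxes:

```python
import numpy as np

# toy (nwave, ntime) fluxes: each wavelength sits at a different baseline level
flux = np.array([[2.0, 2.0, 1.8, 2.0],
                 [5.0, 5.0, 4.5, 5.0]])

# percentile=50 along the time axis gives the median spectrum,
# one normalization value per wavelength
normalization = np.nanpercentile(flux, 50, axis=1)

# dividing it out leaves every wavelength with fluxes near 1
normalized = flux / normalization[:, np.newaxis]
```

Normalizing along the time axis works the same way, with `axis=0` and the normalization broadcast across wavelengths instead.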
__add__(self, other) #

Add the flux of a rainbow and an input array (or another rainbow), returning the result in a new rainbow.

Parameters#

other : Array or float. Multiple options: 1) float 2) 1D array with same length as wavelength axis 3) 1D array with same length as time axis 4) 2D array with same shape as rainbow flux 5) Rainbow other with same dimensions as self.

Returns#

rainbow : Rainbow A new Rainbow with the mathematical operation applied.

Source code in chromatic/rainbows/actions/operations.py
def __add__(self, other):
    """
    Add the flux of a rainbow and an input array (or another rainbow),
    returning the result in a new rainbow.

    Parameters
    ----------
    other : Array or float.
        Multiple options:
        1) float
        2) 1D array with same length as wavelength axis
        3) 1D array with same length as time axis
        4) 2D array with same shape as rainbow flux
        5) Rainbow other with same dimensions as self.

    Returns
    -------
    rainbow : Rainbow
        A new `Rainbow` with the mathematical operation applied.
    """

    # create the history entry
    h = self._create_history_entry("+", locals())

    # calculate a new Rainbow using the operation and error propagation
    result = self._apply_operation(other, operation=np.add, dzdx="1", dzdy="1")

    # append the history entry to the new Rainbow
    result._record_history_entry(h)

    return result
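
The five accepted shapes for `other` all reduce to standard numpy broadcasting against the `(nwave, ntime)` flux array. A minimal sketch of how each shape lines up (illustrative only, not the actual `_apply_operation` implementation; wavelength-like arrays need an explicit trailing axis):

```python
import numpy as np

nwave, ntime = 3, 5
flux = np.ones((nwave, ntime))

# 1) a float applies to every flux element
assert (flux + 2.0).shape == (nwave, ntime)

# 2) a wavelength-like array needs a trailing axis to broadcast along time
wavelike = np.arange(nwave)
assert (flux + wavelike[:, np.newaxis]).shape == (nwave, ntime)

# 3) a time-like array broadcasts along the last axis automatically
timelike = np.arange(ntime)
assert (flux + timelike[np.newaxis, :]).shape == (nwave, ntime)

# 4) a full fluxlike array combines element by element
assert (flux + np.zeros((nwave, ntime))).shape == (nwave, ntime)
```

Option 5, another `Rainbow`, behaves like option 4 once its flux array is extracted.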

__eq__(self, other) #

Test whether self == other.

This compares the wavelike, timelike, and fluxlike arrays for exact matches. It skips entirely over the metadata.

Parameters#

other : Rainbow Another Rainbow to compare to.

Returns#

equal : bool Are they (effectively) equivalent?

Source code in chromatic/rainbows/actions/operations.py
def __eq__(self, other):
    """
    Test whether `self == other`.

    This compares the wavelike, timelike, and fluxlike arrays
    for exact matches. It skips entirely over the metadata.

    Parameters
    ----------
    other : Rainbow
        Another `Rainbow` to compare to.

    Returns
    -------
    equal : bool
        Are they (effectively) equivalent?
    """
    # start by assuming the Rainbows are identical
    same = True

    for a, b in zip([self, other], [other, self]):

        # loop through the core dictionaries
        for d in a._core_dictionaries:
            if d == "metadata":
                continue
            # pull out each core dictionary from both
            d1, d2 = vars(a)[d], vars(b)[d]
            same *= set(d1.keys()) == set(d2.keys())

            # loop through elements of each dictionary
            for k in d1:

                # ignore different histories (e.g. new vs loaded)
                if k != "history":

                    # test that all elements match for both
                    if d == "fluxlike":
                        same *= np.all(
                            np.isclose(
                                a.get(k)[a.ok.astype(bool)], b.get(k)[a.ok.astype(bool)]
                            )
                        )
                    else:
                        same *= np.all(np.isclose(a.get(k), b.get(k)))

    return bool(same)

__mul__(self, other) #

Multiply the flux of a rainbow by an input array (or another rainbow), returning the result in a new rainbow.

Parameters#

other : Array or float. Multiple options: 1) float 2) 1D array with same length as wavelength axis 3) 1D array with same length as time axis 4) 2D array with same shape as rainbow flux 5) Rainbow other with same dimensions as self.

Returns#

rainbow : Rainbow A new Rainbow with the mathematical operation applied.

Source code in chromatic/rainbows/actions/operations.py
def __mul__(self, other):
    """
    Multiply the flux of a rainbow by an input array (or another rainbow),
    returning the result in a new rainbow.

    Parameters
    ----------
    other : Array or float.
        Multiple options:
        1) float
        2) 1D array with same length as wavelength axis
        3) 1D array with same length as time axis
        4) 2D array with same shape as rainbow flux
        5) Rainbow other with same dimensions as self.

    Returns
    -------
    rainbow : Rainbow
        A new `Rainbow` with the mathematical operation applied.
    """

    # create the history entry
    h = self._create_history_entry("*", locals())

    # calculate a new Rainbow using the operation and error propagation
    result = self._apply_operation(other, operation=np.multiply, dzdx="y", dzdy="x")

    # append the history entry to the new Rainbow
    result._record_history_entry(h)

    return result
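
The `dzdx` and `dzdy` strings passed to `_apply_operation` are the partial derivatives of z = x·y, which feed the usual first-order error propagation formula σ_z² = (∂z/∂x)²σ_x² + (∂z/∂y)²σ_y². A quick numerical check with made-up values (not from the library):

```python
import numpy as np

x, sigma_x = 2.0, 0.1  # flux and its uncertainty
y, sigma_y = 3.0, 0.2  # multiplier and its uncertainty

# for z = x * y, the partials are dz/dx = y and dz/dy = x
sigma_z = np.sqrt((y * sigma_x) ** 2 + (x * sigma_y) ** 2)

# equivalent fractional form: |z| * sqrt((sx/x)**2 + (sy/y)**2)
assert np.isclose(sigma_z, abs(x * y) * np.hypot(sigma_x / x, sigma_y / y))
```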

__sub__(self, other) #

Subtract an input array (or another rainbow) from the flux of a rainbow, returning the result in a new rainbow.

Parameters#

other : Array or float. Multiple options: 1) float 2) 1D array with same length as wavelength axis 3) 1D array with same length as time axis 4) 2D array with same shape as rainbow flux 5) Rainbow other with same dimensions as self.

Returns#

rainbow : Rainbow A new Rainbow with the mathematical operation applied.

Source code in chromatic/rainbows/actions/operations.py
def __sub__(self, other):
    """
    Subtract an input array (or another rainbow) from the flux
    of a rainbow, returning the result in a new rainbow.

    Parameters
    ----------
    other : Array or float.
        Multiple options:
        1) float
        2) 1D array with same length as wavelength axis
        3) 1D array with same length as time axis
        4) 2D array with same shape as rainbow flux
        5) Rainbow other with same dimensions as self.

    Returns
    -------
    rainbow : Rainbow
        A new `Rainbow` with the mathematical operation applied.
    """
    # create the history entry
    h = self._create_history_entry("-", locals())

    # calculate a new Rainbow using the operation and error propagation
    result = self._apply_operation(other, operation=np.subtract, dzdx="1", dzdy="1")

    # append the history entry to the new Rainbow
    result._record_history_entry(h)

    return result

__truediv__(self, other) #

Divide the flux of a rainbow by an input array (or another rainbow), returning the result in a new rainbow.

Parameters#

other : Array or float. Multiple options: 1) float 2) 1D array with same length as wavelength axis 3) 1D array with same length as time axis 4) 2D array with same shape as rainbow flux 5) Rainbow other with same dimensions as self.

Returns#

rainbow : Rainbow A new Rainbow with the mathematical operation applied.

Source code in chromatic/rainbows/actions/operations.py
def __truediv__(self, other):
    """
    Divide the flux of a rainbow by an input array (or another rainbow),
    returning the result in a new rainbow.

    Parameters
    ----------
    other : Array or float.
        Multiple options:
        1) float
        2) 1D array with same length as wavelength axis
        3) 1D array with same length as time axis
        4) 2D array with same shape as rainbow flux
        5) Rainbow other with same dimensions as self.

    Returns
    -------
    rainbow : Rainbow
        A new `Rainbow` with the mathematical operation applied.
    """
    # create the history entry
    h = self._create_history_entry("/", locals())

    # calculate a new Rainbow using the operation and error propagation
    result = self._apply_operation(
        other, operation=np.true_divide, dzdx="1/y", dzdy="-x/y**2"
    )

    # append the history entry to the new Rainbow
    result._record_history_entry(h)

    return result
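
Here the derivative strings encode the partials of z = x/y, ∂z/∂x = 1/y and ∂z/∂y = -x/y², which recover the familiar fractional-error formula for division. A quick numerical check with made-up values (not from the library):

```python
import numpy as np

x, sigma_x = 2.0, 0.1  # flux and its uncertainty
y, sigma_y = 3.0, 0.2  # divisor and its uncertainty

# first-order propagation with dz/dx = 1/y and dz/dy = -x/y**2
sigma_z = np.sqrt(((1 / y) * sigma_x) ** 2 + ((-x / y**2) * sigma_y) ** 2)

# equivalent fractional form: |x/y| * sqrt((sx/x)**2 + (sy/y)**2)
assert np.isclose(sigma_z, abs(x / y) * np.hypot(sigma_x / x, sigma_y / y))
```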

diff(self, other) #

Test whether self == other, and print the differences.

This compares the wavelike, timelike, and fluxlike arrays for exact matches. It skips entirely over the metadata. The diff function is the same as __eq__, but a little more verbose, just to serve as a helpful debugging tool.

Parameters#

other : Rainbow Another Rainbow to compare to.

Returns#

equal : bool Are they (effectively) equivalent?

Source code in chromatic/rainbows/actions/operations.py
def diff(self, other):
    """
    Test whether `self == other`, and print the differences.

    This compares the wavelike, timelike, and fluxlike arrays
    for exact matches. It skips entirely over the metadata.
    The `diff` function is the same as `__eq__`, but a little
    more verbose, just to serve as a helpful debugging tool.

    Parameters
    ----------
    other : Rainbow
        Another `Rainbow` to compare to.

    Returns
    -------
    equal : bool
        Are they (effectively) equivalent?
    """
    # start by assuming the Rainbows are identical
    same = True

    for a, b in zip([self, other], [other, self]):

        # loop through the core dictionaries
        for d in a._core_dictionaries:
            if d == "metadata":
                continue
            # pull out each core dictionary from both
            d1, d2 = vars(a)[d], vars(b)[d]
            if set(d1.keys()) != set(d2.keys()):
                differences = list(set(d1.keys()) - set(d2.keys()))
                print(f"{a}.{d} has {differences} and {b} does not")

            # loop through elements of each dictionary
            for k in d1:
                # ignore different histories (e.g. new vs loaded)
                if k == "history":
                    continue
                # test that all elements match for both
                if not np.all(a.get(k) == b.get(k)):
                    print(f"{a}.{d}[{k}] != {b}.{d}[{k}]")

remove_trends(self, method='median_filter', **kw) #

A quick tool to approximately remove trends.

This function provides some simple tools for kludgily removing trends from a Rainbow, through a variety of filtering methods. If you just want to remove all slow trends, whether astrophysical or instrumental, options like the median_filter or savgol_filter will effectively suppress all trends on timescales longer than their filtering window. If you want a more restricted approach to removing long trends, the polyfit option allows you to fit out slow trends.

Parameters#

method : str, optional What method should be used to make an approximate model for smooth trends that will then be subtracted off? differences will do an extremely rough filtering of replacing the fluxes with their first differences. Trends that are smooth relative to the noise will be removed this way, but sharp features will remain. Required keywords: None. median_filter is a wrapper for scipy.ndimage.median_filter. It smooths each data point to the median of its surrounding points in time and/or wavelength. Required keywords: size = centered on each point, what shape rectangle should be used to select surrounding points for the median? The dimensions are (nwavelengths, ntimes), so size=(3,7) means we'll take the median across three wavelengths and seven times. Default is (1,11). savgol_filter is a wrapper for scipy.signal.savgol_filter. It applies a Savitzky-Golay filter for polynomial smoothing. Required keywords: window_length = the length of the filter window, which must be a positive odd integer. Default is 11. polyorder = the order of the polynomial to use. Default is 1. polyfit is a wrapper for numpy.polyfit to use a weighted linear least squares polynomial fit to remove smooth trends in time. Required keywords: deg = the polynomial degree, which must be a positive integer. Default is 1, meaning a line. custom allows users to pass any fluxlike array of model values for an astrophysical signal to remove it. Required keywords: model = the (nwavelengths, ntimes) model array **kw : dict, optional Any additional keywords will be passed to the function that does the filtering. See the method keyword for options.

Returns#

removed : Rainbow The Rainbow with estimated signals removed.

Source code in chromatic/rainbows/actions/remove_trends.py
def remove_trends(self, method="median_filter", **kw):
    """
    A quick tool to approximately remove trends.

    This function provides some simple tools for kludgily
    removing trends from a `Rainbow`, through a variety of
    filtering methods. If you just want to remove all
    slow trends, whether astrophysical or instrumental,
    options like the `median_filter` or `savgol_filter`
    will effectively suppress all trends on timescales
    longer than their filtering window. If you want a
    more restricted approach to removing long trends,
    the `polyfit` option allows you to fit out slow trends.

    Parameters
    ----------
    method : str, optional
        What method should be used to make an approximate model
        for smooth trends that will then be subtracted off?
        `differences` will do an extremely rough filtering
        of replacing the fluxes with their first differences.
        Trends that are smooth relative to the noise will
        be removed this way, but sharp features will remain.
        Required keywords:
            None.
        `median_filter` is a wrapper for scipy.ndimage.median_filter.
        It smooths each data point to the median of its surrounding
        points in time and/or wavelength. Required keywords:
            `size` = centered on each point, what shape rectangle
            should be used to select surrounding points for the median?
            The dimensions are (nwavelengths, ntimes), so `size=(3,7)`
            means we'll take the median across three wavelengths and
            seven times. Default is `(1,11)`.
        `savgol_filter` is a wrapper for scipy.signal.savgol_filter.
        It applies a Savitzky-Golay filter for polynomial smoothing.
        Required keywords:
            `window_length` = the length of the filter window,
            which must be a positive odd integer. Default is `11`.
            `polyorder` = the order of the polynomial to use.
            Default is `1`.
        `polyfit` is a wrapper for numpy.polyfit to use a weighted
        linear least squares polynomial fit to remove smooth trends
        in time. Required keywords:
            `deg` = the polynomial degree, which must be a positive
            integer. Default is `1`, meaning a line.
        `custom` allows users to pass any fluxlike array of model
        values for an astrophysical signal to remove it. Required
        keywords:
            `model` = the (nwavelengths, ntimes) model array
    **kw : dict, optional
        Any additional keywords will be passed to the function
        that does the filtering. See `method` keyword for options.

    Returns
    -------
    removed : Rainbow
        The Rainbow with estimated signals removed.
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("remove_trends", locals())

    # TODO, think about more careful treatment of uncertainties + good/bad data
    new = self._create_copy()

    if method == "differences":
        new.flux = np.sqrt(2) * np.gradient(new.flux, axis=0) + 1

    #    if method == "butter_highpass":
    #        for i in range (0,new.nwave):
    #            nyq = 0.5 * butter_fs
    #            normal_cutoff = butter_cutoff/nyq
    #            b, a = butter(butter_order, normal_cutoff, btype = "high", analog = False)
    #            butter_filt = filtfilt(b, a, new.flux[i,:])
    #            new.flux[i,:] = new.flux[i,:]/butter_filt
    #
    #    if method == "convolve":
    #        for i in range (0,new.nwave):
    #            box = np.ones(win_length)/win_length
    #            grad = np.convolve(new.flux[i,:], box, mode = "same")
    #            new.flux[i,:] = new.flux[i,:]/grad

    if method == "median_filter":
        kw_to_use = dict(size=(1, 11))
        kw_to_use.update(**kw)
        if "size" not in kw:
            cheerfully_suggest(
                f"""
            You didn't supply all expected keywords for '{method}'.
            Relying on defaults, the values will be:
            {kw_to_use}
            """
            )
        medfilt = median_filter(self.flux, **kw_to_use)
        new.flux = self.flux / medfilt
        new.uncertainty = self.uncertainty / medfilt

    if method == "savgol_filter":
        kw_to_use = dict(window_length=11, polyorder=1)
        kw_to_use.update(**kw)
        if ("window_length" not in kw) or ("polyorder" not in kw):
            cheerfully_suggest(
                f"""
            You didn't supply all expected keywords for '{method}'.
            Relying on defaults, the values will be:
            {kw_to_use}
            """
            )
        for i in range(new.nwave):
            savgolfilter = savgol_filter(self.flux[i, :], **kw_to_use)
            new.flux[i, :] = self.flux[i, :] / savgolfilter
            new.uncertainty[i, :] = self.uncertainty[i, :] / savgolfilter

    if method == "polyfit":
        kw_to_use = dict(deg=1)
        kw_to_use.update(**kw)
        if "deg" not in kw:
            cheerfully_suggest(
                f"""
            You didn't supply all expected keywords for '{method}'.
            Relying on defaults, the values will be:
            {kw_to_use}
            """
            )
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            for i in range(new.nwave):
                x, y, sigma = self.get_ok_data_for_wavelength(
                    i, express_badness_with_uncertainty=True
                )
                ok = np.isfinite(y)
                if np.sum(ok) >= 2:
                    try:
                        coefs = np.polyfit(
                            x=remove_unit(x)[ok],
                            y=remove_unit(y)[ok],
                            w=1 / remove_unit(sigma)[ok],
                            **kw_to_use,
                        )
                        poly = np.polyval(coefs, remove_unit(x))
                        new.flux[i, :] = self.flux[i, :] / poly
                        new.uncertainty[i, :] = self.uncertainty[i, :] / poly
                    except Exception:
                        pass

    if method == "custom":
        if "model" not in kw:
            raise ValueError("You need a fluxlike `model` for this `custom` method")
        elif kw["model"].shape != new.flux.shape:
            raise ValueError("Your model doesn't match flux shape")
        else:
            new.flux = new.flux / kw["model"]
            new.uncertainty = new.uncertainty / kw["model"]

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # return the new Rainbow
    return new
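
As a sketch of the `median_filter` path above: smooth each row along time, then divide the flux (and uncertainty) by that smooth model. This toy example uses synthetic data (not from the library) and assumes `scipy` is installed, calling `scipy.ndimage.median_filter` directly:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
nwave, ntime = 4, 200
time = np.linspace(0, 1, ntime)

# synthetic flux: a slow linear ramp plus small noise, per wavelength
flux = 1 + 0.1 * time[np.newaxis, :] + rng.normal(0, 0.001, (nwave, ntime))

# size=(1, 11) smooths along time only, leaving wavelengths independent
model = median_filter(flux, size=(1, 11))
detrended = flux / model

# the slow ramp is gone; detrended fluxes scatter around 1
assert abs(np.median(detrended) - 1) < 0.01
```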

shift(self, velocity=0 * u.km / u.s) #

Doppler shift the wavelengths of this Rainbow.

This shifts the wavelengths in a Rainbow by applying a velocity shift. Positive velocities make wavelengths longer (redshift); negative velocities make wavelengths shorter (blueshift).

Parameters#

velocity : Quantity The systemic velocity by which we should shift, with units of velocity (for example, u.km/u.s)

Source code in chromatic/rainbows/actions/shift.py
def shift(self, velocity=0 * u.km / u.s):
    """
    Doppler shift the wavelengths of this `Rainbow`.

    This shifts the wavelengths in a `Rainbow` by
    applying a velocity shift. Positive velocities make
    wavelengths longer (redshift); negative velocities make
    wavelengths shorter (blueshift).

    Parameters
    ----------
    velocity : Quantity
        The systemic velocity by which we should shift,
        with units of velocity (for example, u.km/u.s)
    """

    # create a history entry for this action (before other variables are defined)
    h = self._create_history_entry("shift", locals())

    # create a new copy of this rainbow
    new = self._create_copy()

    # get the speed of light from astropy constants
    lightspeed = con.c.to("km/s")  # speed of light in km/s

    # calculate beta and make sure the units cancel
    beta = (velocity / lightspeed).decompose()

    # apply wavelength shift
    new_wavelength = new.wavelength * np.sqrt((1 + beta) / (1 - beta))
    new.wavelike["wavelength"] = new_wavelength

    # append the history entry to the new Rainbow
    new._record_history_entry(h)

    # return the new object
    return new
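
The wavelength update above is the relativistic Doppler factor, λ' = λ·√((1 + β)/(1 − β)) with β = v/c. A standalone sketch of the same arithmetic with plain floats (no astropy units):

```python
import math

def doppler_shift(wavelength, velocity_km_s):
    # relativistic Doppler factor with beta = v/c
    c_km_s = 299792.458  # speed of light in km/s
    beta = velocity_km_s / c_km_s
    return wavelength * math.sqrt((1 + beta) / (1 - beta))

assert doppler_shift(1.0, 0.0) == 1.0     # at rest, no shift
assert doppler_shift(1.0, +100.0) > 1.0   # receding: redshift, longer
assert doppler_shift(1.0, -100.0) < 1.0   # approaching: blueshift, shorter
```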

trim(self, t_min=None, t_max=None, w_min=None, w_max=None, just_edges=True, when_to_give_up=1, minimum_acceptable_ok=1) #

Trim away bad wavelengths and/or times.

If entire wavelengths or times are marked as not ok, we can probably remove them to simplify calculations and visualizations. This function will trim those away, by default only removing problem rows/columns on the ends, to maintain a contiguous block.

Parameters#

t_min : u.Quantity The minimum time to keep. t_max : u.Quantity The maximum time to keep. w_min : u.Quantity The minimum wavelength to keep. w_max : u.Quantity The maximum wavelength to keep. just_edges : bool, optional Should we only trim the outermost bad wavelength bins? True = Just trim off the bad edges and keep interior bad values. Keeping interior data, even if they're all bad, often helps to make for more intuitive imshow plots. False = Trim off every bad wavelength, whether it's on the edge or somewhere in the middle of the dataset. The resulting Rainbow will be smaller, but it might be a little tricky to visualize with imshow. when_to_give_up : float, optional The fraction of times that must be nan or not OK for the entire wavelength to be considered bad (default = 1). 1.0 = trim only if all times are bad 0.5 = trim if more than 50% of times are bad 0.0 = trim if any times are bad minimum_acceptable_ok : float, optional The numbers in the .ok attribute express "how OK?" each data point is, ranging from 0 (not OK) to 1 (super OK). In most cases, .ok will be binary, but there may be times where it's intermediate (for example, if a bin was created from some data that were not OK and some that were). The minimum_acceptable_ok parameter sets the minimum level of OK-ness a point needs in order not to get trimmed.

Returns#

trimmed : Rainbow The trimmed Rainbow.

Source code in chromatic/rainbows/actions/trim.py
def trim(
    self,
    t_min=None,
    t_max=None,
    w_min=None,
    w_max=None,
    just_edges=True,
    when_to_give_up=1,
    minimum_acceptable_ok=1,
):
    """
    Trim away bad wavelengths and/or times.

    If entire wavelengths or times are marked as not `ok`,
    we can probably remove them to simplify calculations
    and visualizations. This function will trim those away,
    by default only removing problem rows/columns on the ends,
    to maintain a contiguous block.

    Parameters
    ----------
    t_min : u.Quantity
        The minimum time to keep.
    t_max : u.Quantity
        The maximum time to keep.
    w_min : u.Quantity
        The minimum wavelength to keep.
    w_max : u.Quantity
        The maximum wavelength to keep.
    just_edges : bool, optional
        Should we only trim the outermost bad wavelength bins?
            `True` = Just trim off the bad edges and keep
            interior bad values. Keeping interior data, even if
            they're all bad, often helps to make for more
            intuitive imshow plots.
            `False` = Trim off every bad wavelength, whether it's on
            the edge or somewhere in the middle of the dataset.
            The resulting Rainbow will be smaller, but it might
            be a little tricky to visualize with imshow.
    when_to_give_up : float, optional
        The fraction of times that must be nan or not OK
        for the entire wavelength to be considered bad (default = 1).
            `1.0` = trim only if all times are bad
            `0.5` = trim if more than 50% of times are bad
            `0.0` = trim if any times are bad
    minimum_acceptable_ok : float, optional
        The numbers in the `.ok` attribute express "how OK?" each
        data point is, ranging from 0 (not OK) to 1 (super OK).
        In most cases, `.ok` will be binary, but there may be times
        where it's intermediate (for example, if a bin was created
        from some data that were not OK and some that were).
        The `minimum_acceptable_ok` parameter sets the minimum
        level of OK-ness a point needs in order not to get trimmed.

    Returns
    -------
    trimmed : Rainbow
        The trimmed `Rainbow`.
    """

    trimmed = self.trim_times(
        t_min=t_min,
        t_max=t_max,
        when_to_give_up=when_to_give_up,
        just_edges=just_edges,
        minimum_acceptable_ok=minimum_acceptable_ok,
    )
    trimmed = trimmed.trim_wavelengths(
        w_min=w_min,
        w_max=w_max,
        when_to_give_up=when_to_give_up,
        just_edges=just_edges,
        minimum_acceptable_ok=minimum_acceptable_ok,
    )

    return trimmed
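
The `just_edges=True` behavior can be sketched as a simple mask: keep everything between the first and last acceptable rows, leaving any bad interior rows in place. A hypothetical helper (not part of chromatic) showing the idea:

```python
import numpy as np

def edge_trim_mask(is_good):
    """Keep the contiguous block between the first and last good entries."""
    keep = np.zeros_like(is_good, dtype=bool)
    good = np.nonzero(is_good)[0]
    if len(good) > 0:
        keep[good[0] : good[-1] + 1] = True
    return keep

ok = np.array([False, False, True, True, False, True, False])
# bad edges are trimmed; the bad interior point (index 4) is kept
assert edge_trim_mask(ok).tolist() == [False, False, True, True, True, True, False]
```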

🌈 Get/Timelike#

get_average_lightcurve(self) #

Return a lightcurve of the star, averaged over all wavelengths.

This uses bin, which is a horribly slow way of doing what is fundamentally a very simple array calculation, since we don't actually need to deal with partial pixels.

Returns#

lightcurve : array Timelike array of fluxes.

Source code in chromatic/rainbows/get/timelike/average_lightcurve.py
def get_average_lightcurve(self):
    """
    Return a lightcurve of the star, averaged over all wavelengths.

    This uses `bin`, which is a horribly slow way of doing what is
    fundamentally a very simple array calculation, since we
    don't actually need to deal with partial pixels.

    Returns
    -------
    lightcurve : array
        Timelike array of fluxes.
    """
    return self.get_average_lightcurve_as_rainbow().flux[0, :]

get_median_lightcurve(self) #

Return a lightcurve of the star, medianed over all wavelengths.

Returns#

median_lightcurve : array Timelike array of fluxes.

Source code in chromatic/rainbows/get/timelike/median_lightcurve.py
def get_median_lightcurve(self):
    """
    Return a lightcurve of the star, medianed over all wavelengths.

    Returns
    -------
    median_lightcurve : array
        Timelike array of fluxes.
    """
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        return np.nanmedian(self.get_ok_data(), axis=0)

get_for_time(self, i, quantity='flux') #

Get 'quantity' associated with time 'i'.

Parameters#

i : int The time index to retrieve. quantity : string The quantity to retrieve. If it is flux-like, column 'i' will be returned. If it is wave-like, the array itself will be returned.

Returns#

quantity : array, Quantity The 1D array of 'quantity' corresponding to time 'i'.

Source code in chromatic/rainbows/get/timelike/subset.py
def get_for_time(self, i, quantity="flux"):
    """
    Get `'quantity'` associated with time `'i'`.

    Parameters
    ----------
    i : int
        The time index to retrieve.
    quantity : string
        The quantity to retrieve. If it is flux-like,
        column 'i' will be returned. If it is wave-like,
        the array itself will be returned.

    Returns
    -------
    quantity : array, Quantity
        The 1D array of 'quantity' corresponding to time 'i'.
    """
    z = self.get(quantity)
    if np.shape(z) == self.shape:
        return z[:, i]
    elif len(z) == self.nwave:
        return z
    else:
        raise RuntimeError(
            f"""
        You tried to retrieve time {i} from '{quantity}',
        but this quantity is neither flux-like nor wave-like.
        It's not possible to return a wave-like array. Sorry!
        """
        )

get_ok_data_for_time(self, i, x='wavelength', y='flux', sigma='uncertainty', minimum_acceptable_ok=1, express_badness_with_uncertainty=False) #

A small wrapper to get the good data from a time.

Extract a slice of data, marking data that are not ok either by trimming them out entirely or by inflating their uncertainties to infinity.

Parameters#

i : int The time index to retrieve. x : string, optional What quantity should be retrieved as 'x'? (default = 'wavelength') y : string, optional What quantity should be retrieved as 'y'? (default = 'flux') sigma : string, optional What quantity should be retrieved as 'sigma'? (default = 'uncertainty') minimum_acceptable_ok : float, optional The smallest value of ok that will still be included. (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data) express_badness_with_uncertainty : bool, optional If False, data that don't pass the ok cut will be removed. If True, data that don't pass the ok cut will have their uncertainties inflated to infinity (np.inf).

Returns#

x : array The wavelengths. y : array The desired quantity (default is flux) sigma : array The uncertainty on the desired quantity

Source code in chromatic/rainbows/get/timelike/subset.py
def get_ok_data_for_time(
    self,
    i,
    x="wavelength",
    y="flux",
    sigma="uncertainty",
    minimum_acceptable_ok=1,
    express_badness_with_uncertainty=False,
):
    """
    A small wrapper to get the good data from a time.

    Extract a slice of data, marking data that are not `ok` either
    by trimming them out entirely or by inflating their
    uncertainties to infinity.

    Parameters
    ----------
    i : int
        The time index to retrieve.
    x : string, optional
        What quantity should be retrieved as 'x'? (default = 'wavelength')
    y : string, optional
        What quantity should be retrieved as 'y'? (default = 'flux')
    sigma : string, optional
        What quantity should be retrieved as 'sigma'? (default = 'uncertainty')
    minimum_acceptable_ok : float, optional
        The smallest value of `ok` that will still be included.
        (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
    express_badness_with_uncertainty : bool, optional
        If False, data that don't pass the `ok` cut will be removed.
        If True, data that don't pass the `ok` cut will have their
        uncertainties inflated to infinity (np.inf).

    Returns
    -------
    x : array
        The wavelengths.
    y : array
        The desired quantity (default is `flux`)
    sigma : array
        The uncertainty on the desired quantity
    """

    # get 1D independent variable
    x_values = self.get_for_time(i, x) * 1

    # get 1D array of what to keep
    ok = self.ok[:, i] >= minimum_acceptable_ok

    # get 1D array of the quantity
    y_values = self.get_for_time(i, y) * 1

    # get 1D array of uncertainty
    sigma_values = self.get_for_time(i, sigma) * 1

    if express_badness_with_uncertainty:
        sigma_values[~ok] = np.inf
        return x_values, y_values, sigma_values
    else:
        return x_values[ok], y_values[ok], sigma_values[ok]
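
The trimming-versus-inflating logic above can be sketched with plain numpy arrays (the values below are made up; `minimum_acceptable_ok` plays the same role as in the method):

```python
import numpy as np

# hypothetical 1D slices of data at one time index
wavelength = np.array([0.5, 1.0, 1.5, 2.0])       # x-like values
flux = np.array([1.00, 0.99, 1.01, 0.98])         # y-like values
uncertainty = np.array([0.01, 0.01, 0.02, 0.01])  # sigma-like values
ok = np.array([1.0, 0.0, 1.0, 0.3])               # per-point "ok-ness"

minimum_acceptable_ok = 1
keep = ok >= minimum_acceptable_ok

# trimming mode: drop the not-ok points entirely
x, y, sigma = wavelength[keep], flux[keep], uncertainty[keep]

# badness-as-uncertainty mode: keep every point, inflate bad uncertainties
sigma_inflated = uncertainty.copy()
sigma_inflated[~keep] = np.inf
```

The second mode keeps arrays aligned across all times, which is convenient for fitting codes that expect rectangular data.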

get_times_as_astropy(self, time=None, format=None, scale=None, is_barycentric=None) #

Convert times from a Rainbow into an astropy Time object.

Parameters#

time : Quantity, optional
    The time-like Quantity to be converted.
    If None (default), convert the time values in self.time
    If another time-like Quantity, convert those values.
format : str, optional
    The time format to supply to astropy.time.Time.
    If None (default), format will be pulled from
    self.metadata['time_details']['format']
scale : str, optional
    The time scale to supply to astropy.time.Time.
    If None (default), scale will be pulled from
    self.metadata['time_details']['scale']
is_barycentric : bool, optional
    Are the times already measured relative to the Solar System
    barycenter? This is mostly for warning the user that it's not.
    If None (default), is_barycentric will be pulled from
    self.metadata['time_details']['is_barycentric']

Returns#

astropy_time : Time The times as an astropy Time object.

Source code in chromatic/rainbows/get/timelike/time.py
def get_times_as_astropy(self, time=None, format=None, scale=None, is_barycentric=None):
    """
    Convert times from a `Rainbow` into an astropy `Time` object.

    Parameters
    ----------
    time : Quantity, optional
        The time-like Quantity to be converted.
        If None (default), convert the time values in `self.time`
        If another time-like Quantity, convert those values.
    format : str, optional
        The time format to supply to astropy.time.Time.
        If None (default), format will be pulled from
        `self.metadata['time_details']['format']`
    scale : str, optional
        The time scale to supply to astropy.time.Time.
        If None (default), scale will be pulled from
        `self.metadata['time_details']['scale']`
    is_barycentric : bool, optional
        Are the times already measured relative to the
        Solar System barycenter? This is mostly for warning
        the user that it's not.
        If `None` (default), `is_barycentric` will be pulled from
        `self.metadata['time_details']['is_barycentric']`

    Returns
    -------
    astropy_time : Time
        The times as an astropy `Time` object.
    """

    # take times from self or from the keyword
    if time is None:
        time = self.time

    # give a format warning
    format = format or self.get("time_format")
    if format is None:
        cheerfully_suggest(
            f"""
        `.metadata['time_details']['format']` is not set,
        nor was a `format=` keyword argument provided.

        Since `.time` is already an astropy Quantity,
        this is likely a question of whether the format is
        'jd' or 'mjd' (= 'jd' - 2400000.5) or something else.
        If you get this wrong, you might be lost in time!

        For more about astropy.Time formats, please see:
        https://docs.astropy.org/en/stable/time/index.html#time-format
        """
        )

    # give a scale warning
    scale = scale or self.get("time_scale")
    if scale is None:
        now = Time.now()
        differences_string = ""
        for s in now.SCALES:
            dt = ((getattr(now, s).jd - now.tdb.jd) * u.day).to(u.second)
            differences_string += f"{s:>15} - tdb = {dt:10.6f}\n"
        cheerfully_suggest(
            f"""
        .metadata['time_details']['scale'] is not set,
        nor was a `scale=` keyword argument provided.

        The main question is whether the time scale is 'tdb'
        (Barycentric Dynamical Time) or something close to it,
        or 'utc' or something close to it. The differences
        between these options, at {now.utc.iso} (UTC), are:
        \n{differences_string}
        If you get this wrong, you might be lost in time!

        For more about astropy.Time scales, please see:
        https://docs.astropy.org/en/stable/time/index.html#time-scale
        """
        )

    # give some barycenter warnings
    is_barycentric = is_barycentric or self.get("time_is_barycentric")
    if is_barycentric == True and "ut" in scale.lower():
        cheerfully_suggest(
            f"""
        barycentric={is_barycentric} and scale={scale}
        It's a deeply weird combination to have a barycentric
        time measured at the Solar System barycenter but in
        Earth's leap-second-based UTC system. Please consider
        checking your time details.
        """
        )
    if is_barycentric != True:
        cheerfully_suggest(
            f"""
        The returned time is not known to be measured relative
        to the Solar System barycenter. It's probably therefore
        measured from Earth or the position of your telescope,
        but please be warned that the timing of very distant events
        (like exoplanet transits) might be off by up to about
        8 minutes (= the light travel time between the Earth and Sun).
        """
        )

    # generate astropy Time array
    astropy_time = Time(time, format=format, scale=scale)

    # do a check that the values aren't really weird
    if (astropy_time.min().decimalyear < 1000) or (
        astropy_time.max().decimalyear > 3000
    ):
        cheerfully_suggest(
            f"""
        The times, which span
        jd={astropy_time.min().jd} to jd={astropy_time.max().jd}
        don't seem likely to be within the range of modern astronomical
        observations. Please consider double checking your time values
        and/or (format='{format}', scale='{scale}').
        """
        )

    return astropy_time
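
The 'jd' vs 'mjd' ambiguity warned about above comes down to a fixed offset of 2400000.5 days, which is easy to check with plain arithmetic (no astropy required):

```python
# MJD is defined as JD minus a constant offset of 2400000.5 days
JD_MINUS_MJD = 2400000.5

def jd_to_mjd(jd):
    """Convert a Julian Date to a Modified Julian Date."""
    return jd - JD_MINUS_MJD

# a hypothetical observation time
jd = 2459000.25
mjd = jd_to_mjd(jd)
```

Mixing the two formats up shifts every time by about 2.4 million days, which is exactly the kind of "lost in time" the warning is about.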

set_times_from_astropy(self, astropy_time, is_barycentric=None) #

Set the times for this Rainbow from an astropy Time object.

Parameters#

astropy_time : Time
    The times as an astropy Time object.
is_barycentric : bool, optional
    Are the times already measured relative to the Solar System
    barycenter? This is mostly for warning the user that it's not.
    Options are True, False, None (= don't know).

Returns#

time : Quantity
    An astropy Quantity with units of time, expressing the Time
    as julian day. In addition to this returned variable, the
    function sets the following internal variables:
        self.time  # (= the astropy Quantity of times)
        self.metadata['time_format']  # (= the format to convert back to Time)
        self.metadata['time_scale']  # (= the scale to convert back to Time)
        self.metadata['time_is_barycentric']  # (= is it barycentric?)

Source code in chromatic/rainbows/get/timelike/time.py
def set_times_from_astropy(self, astropy_time, is_barycentric=None):
    """
    Set the times for this `Rainbow` from an astropy `Time` object.

    Parameters
    ----------
    astropy_time : Time
        The times as an astropy `Time` object.
    is_barycentric : bool, optional
        Are the times already measured relative to the
        Solar System barycenter? This is mostly for warning
        the user that it's not. Options are True, False,
        None (= don't know).

    Returns
    -------
    time : Quantity
        An astropy Quantity with units of time,
        expressing the Time as julian day.
        In addition to this returned variable,
        the function sets the following internal
        variables:
        ```
        self.time # (= the astropy Quantity of times)
        self.metadata['time_format'] # (= the format to convert back to Time)
        self.metadata['time_scale'] # (= the scale to convert back to Time)
        self.metadata['time_is_barycentric'] # (= is it barycentric?)
        ```
    """

    # set the formats
    format = "jd"
    unit = u.day
    scale = "tdb"

    # store the necessary values
    self.timelike["time"] = getattr(getattr(astropy_time, scale), format) * unit
    self.metadata["time_format"] = format
    self.metadata["time_scale"] = scale
    self.metadata["time_is_barycentric"] = is_barycentric

    # do some accounting to sync everything together
    self._guess_tscale()
    self._make_sure_time_edges_are_defined()
    return self.time

🌈 Get/Wavelike#

get_average_spectrum(self) #

Return an average spectrum of the star, averaged over all times.

This uses bin, which is a horribly slow way of doing what is fundamentally a very simple array calculation, because we don't need to deal with partial pixels.

Returns#

average_spectrum : array Wavelike array of average spectrum.

Source code in chromatic/rainbows/get/wavelike/average_spectrum.py
def get_average_spectrum(self):
    """
    Return an average spectrum of the star, averaged over all times.

    This uses `bin`, which is a horribly slow way of doing what is
    fundamentally a very simple array calculation, because we
    don't need to deal with partial pixels.

    Returns
    -------
    average_spectrum : array
        Wavelike array of average spectrum.
    """
    return self.get_average_spectrum_as_rainbow().flux[:, 0]

get_expected_uncertainty(self, function=np.nanmedian, *args, **kw) #

Get the typical per-wavelength uncertainty.

Parameters#

function : function, optional
    What function should be used to choose the "typical" value for
    each wavelength? Good options are probably things like
    np.nanmedian, np.median, np.nanmean, np.mean, np.percentile
*args : list, optional
    Additional arguments will be passed to function
**kw : dict, optional
    Additional keyword arguments will be passed to function

Returns#

uncertainty_per_wavelength : array The uncertainty associated with each wavelength.

Source code in chromatic/rainbows/get/wavelike/expected_uncertainty.py
def get_expected_uncertainty(self, function=np.nanmedian, *args, **kw):
    """
    Get the typical per-wavelength uncertainty.

    Parameters
    ----------
    function : function, optional
        What function should be used to choose the "typical"
        value for each wavelength? Good options are probably
        things like `np.nanmedian`, `np.median`, `np.nanmean`,
        `np.mean`, `np.percentile`
    *args : list, optional
        Additional arguments will be passed to `function`
    **kw : dict, optional
        Additional keyword arguments will be passed to `function`

    Returns
    -------
    uncertainty_per_wavelength : array
        The uncertainty associated with each wavelength.
    """
    uncertainty_per_wavelength = function(
        self.uncertainty, *args, axis=self.timeaxis, **kw
    )
    return uncertainty_per_wavelength
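
The heart of this calculation is collapsing the 2D uncertainty array along the time axis; a minimal numpy sketch, assuming a made-up (nwave, ntime) array with the time axis as axis 1:

```python
import numpy as np

# hypothetical (nwave, ntime) uncertainty array, with one NaN
uncertainty = np.array([
    [0.010, 0.012, np.nan],
    [0.020, 0.021, 0.019],
])

# "typical" per-wavelength uncertainty, ignoring NaNs
typical = np.nanmedian(uncertainty, axis=1)

# any reducing function works, e.g. a pessimistic upper percentile
pessimistic = np.nanpercentile(uncertainty, 90, axis=1)
```

Passing a NaN-aware function (`np.nanmedian` rather than `np.median`) matters whenever bad pixels have been flagged as NaN.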

get_measured_scatter_in_bins(self, ntimes=2, nbins=4, method='standard-deviation', minimum_acceptable_ok=1e-10) #

Get measured scatter in time bins of increasing sizes.

For uncorrelated Gaussian noise, the scatter should decrease as 1/sqrt(N), where N is the number of points in a bin. This function calculates the scatter for a range of N, thus providing a quick test for correlated noise.

Parameters#

ntimes : int
    How many times should be binned together? Binning will continue
    recursively until fewer than nbins would be left.
nbins : int
    What's the smallest number of bins that should be used to
    calculate a scatter? The absolute minimum is 2.
method : string
    What method to use to obtain measured scatter.
    Current options are 'MAD', 'standard-deviation'.
minimum_acceptable_ok : float
    The smallest value of ok that will still be included.
    (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)

Returns#

scatter_dictionary : dict Dictionary with lots of information about scatter in bins per wavelength.

Source code in chromatic/rainbows/get/wavelike/measured_scatter_in_bins.py
def get_measured_scatter_in_bins(
    self, ntimes=2, nbins=4, method="standard-deviation", minimum_acceptable_ok=1e-10
):
    """
    Get measured scatter in time bins of increasing sizes.

    For uncorrelated Gaussian noise, the scatter should
    decrease as 1/sqrt(N), where N is the number of points
    in a bin. This function calculates the scatter for
    a range of N, thus providing a quick test for
    correlated noise.

    Parameters
    ----------
    ntimes : int
        How many times should be binned together? Binning will
        continue recursively until fewer than nbins would be left.
    nbins : int
        What's the smallest number of bins that should be used to
        calculate a scatter? The absolute minimum is 2.
    method : string
        What method to use to obtain measured scatter. Current options are 'MAD', 'standard-deviation'.
    minimum_acceptable_ok : float
        The smallest value of `ok` that will still be included.
        (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)

    Returns
    -------
    scatter_dictionary : dict
        Dictionary with lots of information about scatter in bins per wavelength.
    """

    from ...rainbow import Rainbow

    if "remove_trends" in self.history():
        cheerfully_suggest(
            f"""
        The `remove_trends` function was applied to this `Rainbow`,
        making it very plausible that some long-timescale signals
        and/or noise have been suppressed. Be suspicious of binned
        scatters on long timescales.
        """
        )

    # create a simplified rainbow so we don't waste time binning
    simple = Rainbow(
        time=self.time,
        wavelength=self.wavelength,
        flux=self.flux,
        uncertainty=self.uncertainty,
        ok=self.ok,
    )

    # loop through binning until done
    binnings = [simple]
    N = [1]
    while binnings[-1].ntime > ntimes * nbins:
        binnings.append(
            binnings[-1].bin(ntimes=ntimes, minimum_acceptable_ok=minimum_acceptable_ok)
        )
        N.append(N[-1] * ntimes)

    scatters = [b.get_measured_scatter(method=method) for b in binnings]
    expectation = [b.get_expected_uncertainty() for b in binnings]
    uncertainty_on_scatters = (
        scatters
        / np.sqrt(2 * (np.array([b.ntime for b in binnings]) - 1))[:, np.newaxis]
    )
    dt = [np.median(np.diff(b.time)) for b in binnings]

    return dict(
        N=np.array(N),
        dt=u.Quantity(dt),
        scatters=np.transpose(scatters),
        expectation=np.transpose(expectation),
        uncertainty=np.transpose(uncertainty_on_scatters),
    )
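
The 1/sqrt(N) expectation can be demonstrated with simulated white noise; this toy sketch (not the method itself) bins pairs of points repeatedly and watches the scatter shrink:

```python
import numpy as np

rng = np.random.default_rng(42)
flux = rng.normal(1.0, 0.01, size=4096)  # pure white noise, sigma = 0.01

# bin by factors of 2 and measure the scatter at each binning level
N, scatters = [], []
n = 1
binned = flux
while binned.size >= 4:
    N.append(n)
    scatters.append(np.std(binned))
    binned = binned.reshape(-1, 2).mean(axis=1)  # average adjacent pairs
    n *= 2

# for uncorrelated noise, scatter should shrink roughly as 1/sqrt(N)
expected = scatters[0] / np.sqrt(np.array(N))
```

If the measured scatters flatten out above the `expected` curve at large N, that is the signature of correlated (red) noise that this diagnostic is designed to reveal.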

get_measured_scatter(self, quantity='flux', method='standard-deviation', minimum_acceptable_ok=1e-10) #

Get measured scatter for each wavelength.

Calculate the standard deviation (or outlier-robust equivalent) for each wavelength, which can be compared to the expected per-wavelength uncertainty.

Parameters#

quantity : string, optional
    The fluxlike quantity for which we should calculate the scatter.
method : string, optional
    What method to use to obtain measured scatter.
    Current options are 'MAD', 'standard-deviation'.
minimum_acceptable_ok : float, optional
    The smallest value of ok that will still be included.
    (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)

Returns#

scatter : array Wavelike array of measured scatters.

Source code in chromatic/rainbows/get/wavelike/measured_scatter.py
def get_measured_scatter(
    self, quantity="flux", method="standard-deviation", minimum_acceptable_ok=1e-10
):
    """
    Get measured scatter for each wavelength.

    Calculate the standard deviation (or outlier-robust
    equivalent) for each wavelength, which can be compared
    to the expected per-wavelength uncertainty.

    Parameters
    ----------
    quantity : string, optional
        The `fluxlike` quantity for which we should calculate the scatter.
    method : string, optional
        What method to use to obtain measured scatter.
        Current options are 'MAD', 'standard-deviation'.
    minimum_acceptable_ok : float, optional
        The smallest value of `ok` that will still be included.
        (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)

    Returns
    -------
    scatter : array
        Wavelike array of measured scatters.
    """

    if method not in ["standard-deviation", "MAD"]:
        cheerfully_suggest(
            f"""
        '{method}' is not an available method.
        Please choose from ['MAD', 'standard-deviation'].
        """
        )
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")

        scatters = np.zeros(self.nwave)
        for i in range(self.nwave):
            x, y, sigma = self.get_ok_data_for_wavelength(
                i, y=quantity, minimum_acceptable_ok=minimum_acceptable_ok
            )
            if u.Quantity(y).unit == u.Unit(""):
                y_value, y_unit = y, 1
            else:
                y_value, y_unit = y.value, y.unit
            if method == "standard-deviation":
                scatters[i] = np.nanstd(y_value)
            elif method == "MAD":
                scatters[i] = mad_std(y_value, ignore_nan=True)
        return scatters * y_unit
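
The two methods differ most when outliers are present. The 'MAD' option relies on astropy's `mad_std`; for illustration, here is a numpy-only equivalent (the Gaussian scale factor ~1.4826 converts a median absolute deviation to a standard-deviation-like scatter):

```python
import numpy as np

def mad_std_numpy(y):
    """Outlier-robust scatter estimate: scaled median absolute deviation."""
    return 1.4826 * np.nanmedian(np.abs(y - np.nanmedian(y)))

# well-behaved flux with one wild outlier
flux = np.array([1.00, 1.01, 0.99, 1.00, 1.02, 0.98, 5.00])

robust = mad_std_numpy(flux)  # barely notices the outlier
naive = np.nanstd(flux)       # blown up by the outlier
```

For clean Gaussian data the two estimates agree; for data with cosmic rays or other glitches the 'MAD' method is usually the safer choice.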

get_median_spectrum(self) #

Return a spectrum of the star, medianed over all times.

Returns#

median_spectrum : array Wavelike array of fluxes.

Source code in chromatic/rainbows/get/wavelike/median_spectrum.py
def get_median_spectrum(self):
    """
    Return a spectrum of the star, medianed over all times.

    Returns
    -------
    median_spectrum : array
        Wavelike array of fluxes.
    """
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        return np.nanmedian(self.get_ok_data(), axis=1)

get_spectral_resolution(self, pixels_per_resolution_element=1) #

Estimate the R=w/dw spectral resolution.

Higher spectral resolutions correspond to more wavelength points within a particular interval. By default, it's estimated for the interval between adjacent wavelength bins. In unbinned data coming directly from a telescope, there's a good chance that adjacent pixels both sample the same resolution element as blurred by the telescope optics, so the pixels_per_resolution_element keyword should likely be larger than 1.

Parameters#

pixels_per_resolution_element : float, optional How many pixels do we consider as a resolution element?

Returns#

R : array The spectral resolution at each wavelength.

Source code in chromatic/rainbows/get/wavelike/spectral_resolution.py
def get_spectral_resolution(self, pixels_per_resolution_element=1):
    """
    Estimate the R=w/dw spectral resolution.

    Higher spectral resolutions correspond to more wavelength
    points within a particular interval. By default, it's
    estimated for the interval between adjacent wavelength
    bins. In unbinned data coming directly from a telescope,
    there's a good chance that adjacent pixels both sample
    the same resolution element as blurred by the telescope
    optics, so the `pixels_per_resolution_element` keyword
    should likely be larger than 1.

    Parameters
    ----------
    pixels_per_resolution_element : float, optional
        How many pixels do we consider as a resolution element?

    Returns
    -------
    R : array
        The spectral resolution at each wavelength.
    """

    # calculate spectral resolution, for this pixels/element
    w = self.wavelength
    dw = np.gradient(self.wavelength)
    R = np.abs(w / dw / pixels_per_resolution_element)

    return R
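
A handy sanity check: on a logarithmically spaced wavelength grid, the estimate above should return a nearly constant R. A toy numpy sketch of the same calculation:

```python
import numpy as np

# hypothetical wavelength grid at constant R = w/dw ~ 100
R_target = 100
w = 1.0 * np.exp(np.arange(200) / R_target)  # log-spaced grid

# same estimate as in the method: R = |w / gradient(w)|
dw = np.gradient(w)
R = np.abs(w / dw)
```

Away from the grid edges (where `np.gradient` falls back to one-sided differences), R comes out within a fraction of a percent of the target.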

get_for_wavelength(self, i, quantity='flux') #

Get 'quantity' associated with wavelength 'i'.

Parameters#

i : int
    The wavelength index to retrieve.
quantity : string
    The quantity to retrieve. If it is flux-like, row 'i' will be
    returned. If it is time-like, the array itself will be returned.

Returns#

quantity : array, Quantity The 1D array of 'quantity' corresponding to wavelength 'i'.

Source code in chromatic/rainbows/get/wavelike/subset.py
def get_for_wavelength(self, i, quantity="flux"):
    """
    Get `'quantity'` associated with wavelength `'i'`.

    Parameters
    ----------
    i : int
        The wavelength index to retrieve.
    quantity : string
        The quantity to retrieve. If it is flux-like,
        row 'i' will be returned. If it is time-like,
        the array itself will be returned.

    Returns
    -------
    quantity : array, Quantity
        The 1D array of 'quantity' corresponding to wavelength 'i'.
    """
    z = self.get(quantity)
    if np.shape(z) == self.shape:
        return z[i, :]
    elif len(z) == self.ntime:
        return z
    else:
        raise RuntimeError(
            f"""
        You tried to retrieve wavelength {i} from '{quantity}',
        but this quantity is neither flux-like nor time-like.
        It's not possible to return a time-like array. Sorry!
        """
        )
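
The shape-based dispatch above (flux-like quantities get sliced to one row, time-like quantities are returned whole) can be sketched with plain numpy shapes (hypothetical arrays and a hypothetical helper name):

```python
import numpy as np

nwave, ntime = 3, 5
shape = (nwave, ntime)

def get_for_wavelength_sketch(z, i):
    """Return the 1D time series associated with wavelength index i."""
    if np.shape(z) == shape:  # flux-like: one value per (wavelength, time)
        return z[i, :]
    elif len(z) == ntime:     # time-like: already a 1D time series
        return z
    else:
        raise RuntimeError("quantity is neither flux-like nor time-like")

flux = np.arange(nwave * ntime).reshape(shape)  # flux-like 2D array
time = np.linspace(0.0, 1.0, ntime)             # time-like 1D array
```

Either way the caller gets back an array with one entry per time, so downstream code doesn't need to care which kind of quantity it asked for.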

get_ok_data_for_wavelength(self, i, x='time', y='flux', sigma='uncertainty', minimum_acceptable_ok=1, express_badness_with_uncertainty=False) #

A small wrapper to get the good data from a wavelength.

Extract a slice of data, marking data that are not ok either by trimming them out entirely or by inflating their uncertainties to infinity.

Parameters#

i : int
    The wavelength index to retrieve.
x : string, optional
    What quantity should be retrieved as 'x'? (default = 'time')
y : string, optional
    What quantity should be retrieved as 'y'? (default = 'flux')
sigma : string, optional
    What quantity should be retrieved as 'sigma'? (default = 'uncertainty')
minimum_acceptable_ok : float, optional
    The smallest value of ok that will still be included.
    (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
express_badness_with_uncertainty : bool, optional
    If False, data that don't pass the ok cut will be removed.
    If True, data that don't pass the ok cut will have their
    uncertainties inflated to infinity (np.inf).

Returns#

x : array
    The time.
y : array
    The desired quantity (default is flux)
sigma : array
    The uncertainty on the desired quantity

Source code in chromatic/rainbows/get/wavelike/subset.py
def get_ok_data_for_wavelength(
    self,
    i,
    x="time",
    y="flux",
    sigma="uncertainty",
    minimum_acceptable_ok=1,
    express_badness_with_uncertainty=False,
):
    """
    A small wrapper to get the good data from a wavelength.

    Extract a slice of data, marking data that are not `ok` either
    by trimming them out entirely or by inflating their
    uncertainties to infinity.

    Parameters
    ----------
    i : int
        The wavelength index to retrieve.
    x : string, optional
        What quantity should be retrieved as 'x'? (default = 'time')
    y : string, optional
        What quantity should be retrieved as 'y'? (default = 'flux')
    sigma : string, optional
        What quantity should be retrieved as 'sigma'? (default = 'uncertainty')
    minimum_acceptable_ok : float, optional
        The smallest value of `ok` that will still be included.
        (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
    express_badness_with_uncertainty : bool, optional
        If False, data that don't pass the `ok` cut will be removed.
        If True, data that don't pass the `ok` cut will have their
        uncertainties inflated to infinity (np.inf).

    Returns
    -------
    x : array
        The time.
    y : array
        The desired quantity (default is `flux`)
    sigma : array
        The uncertainty on the desired quantity
    """

    # get 1D independent variable
    x_values = self.get_for_wavelength(i, x) * 1

    # get 1D array of what to keep
    ok = self.ok[i, :] >= minimum_acceptable_ok

    # get 1D array of the quantity
    y_values = self.get_for_wavelength(i, y) * 1

    # get 1D array of uncertainty
    sigma_values = self.get_for_wavelength(i, sigma) * 1

    if express_badness_with_uncertainty:
        sigma_values[~ok] = np.inf
        return x_values, y_values, sigma_values
    else:
        return x_values[ok], y_values[ok], sigma_values[ok]
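
The `express_badness_with_uncertainty=True` option pairs naturally with weighted fits, because an infinite uncertainty becomes exactly zero inverse-variance weight (a small numpy sketch with made-up numbers):

```python
import numpy as np

flux = np.array([1.00, 0.50, 1.02, 1.01])     # second point is bad
sigma = np.array([0.01, np.inf, 0.01, 0.01])  # badness as infinite uncertainty

# inverse-variance weights: bad points drop out of the weighted mean
weights = 1.0 / sigma**2
weighted_mean = np.sum(weights * flux) / np.sum(weights)
```

This is why inflating uncertainties is often preferable to trimming: the arrays stay the same length for every wavelength, while the bad points still contribute nothing.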

🌈 Visualizations#

animate_lightcurves(self, filename='animated-lightcurves.gif', fps=5, dpi=None, bitrate=None, **kwargs) #

Create an animation to show how the lightcurve changes as we flip through every wavelength.

Parameters#

filename : str
    Name of file you'd like to save results in.
    Currently supports only .gif or .html files.
fps : float
    frames/second of animation
ax : Axes
    The axes into which this animated plot should go.
xlim : tuple
    Custom xlimits for the plot
ylim : tuple
    Custom ylimits for the plot
cmap : str
    The color map to use for expressing wavelength
vmin : Quantity
    The minimum value to use for the wavelength colormap
vmax : Quantity
    The maximum value to use for the wavelength colormap
scatterkw : dict
    A dictionary of keywords to be passed to plt.scatter so you can
    have more detailed control over the plot appearance. Common
    keyword arguments might include:
    [s, c, marker, alpha, linewidths, edgecolors, zorder] (and more)
    More details are available at
    https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
textkw : dict
    A dictionary of keywords passed to plt.text so you can have more
    detailed control over the text appearance. Common keyword
    arguments might include:
    [alpha, backgroundcolor, color, fontfamily, fontsize, fontstyle,
    fontweight, rotation, zorder] (and more)
    More details are available at
    https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html

Source code in chromatic/rainbows/visualizations/animate.py
def animate_lightcurves(
    self,
    filename="animated-lightcurves.gif",
    fps=5,
    dpi=None,
    bitrate=None,
    **kwargs,
):
    """
    Create an animation to show how the lightcurve changes
    as we flip through every wavelength.

    Parameters
    ----------
    filename : str
        Name of file you'd like to save results in.
        Currently supports only .gif or .html files.
    fps : float
        frames/second of animation
    ax : Axes
        The axes into which this animated plot should go.
    xlim : tuple
        Custom xlimits for the plot
    ylim : tuple
        Custom ylimits for the plot
    cmap : str
        The color map to use for expressing wavelength
    vmin : Quantity
        The minimum value to use for the wavelength colormap
    vmax : Quantity
        The maximum value to use for the wavelength colormap
    scatterkw : dict
        A dictionary of keywords to be passed to `plt.scatter`
        so you can have more detailed control over the plot
        appearance. Common keyword arguments might include:
        `[s, c, marker, alpha, linewidths, edgecolors, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
    textkw : dict
        A dictionary of keywords passed to `plt.text`
        so you can have more detailed control over the text
        appearance. Common keyword arguments might include:
        `[alpha, backgroundcolor, color, fontfamily, fontsize,
          fontstyle, fontweight, rotation, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
    """
    self._setup_animate_lightcurves(**kwargs)

    filename = self._label_plot_file(filename)

    # initialize the animator
    writer, displayer = get_animation_writer_and_displayer(
        filename=filename, fps=fps, bitrate=bitrate
    )

    # set up to save frames directly into the animation
    figure = self._animate_lightcurves_components["fi"]
    with writer.saving(figure, filename, dpi or figure.get_dpi()):
        for i in tqdm(range(self.nwave), leave=False):
            self._animate_lightcurves_components["update"](i)
            writer.grab_frame()

    # close the figure that was created
    plt.close(figure)

    # display the animation
    from IPython.display import display

    try:
        display(displayer(filename, embed=True))
    except TypeError:
        display(displayer(filename))

animate_spectra(self, filename='animated-spectra.gif', fps=5, dpi=None, bitrate=None, **kw) #

Create an animation to show how the spectrum changes as we flip through every timepoint.

Parameters#

filename : str
    Name of file you'd like to save results in.
    Currently supports only .gif files.
ax : Axes
    The axes into which this animated plot should go.
fps : float
    frames/second of animation
xlim : tuple
    Custom xlimits for the plot
ylim : tuple
    Custom ylimits for the plot
cmap : str
    The color map to use for expressing wavelength
vmin : Quantity
    The minimum value to use for the wavelength colormap
vmax : Quantity
    The maximum value to use for the wavelength colormap
scatterkw : dict
    A dictionary of keywords to be passed to plt.scatter so you can
    have more detailed control over the plot appearance. Common
    keyword arguments might include:
    [s, c, marker, alpha, linewidths, edgecolors, zorder] (and more)
    More details are available at
    https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
textkw : dict
    A dictionary of keywords passed to plt.text so you can have more
    detailed control over the text appearance. Common keyword
    arguments might include:
    [alpha, backgroundcolor, color, fontfamily, fontsize, fontstyle,
    fontweight, rotation, zorder] (and more)
    More details are available at
    https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html

Source code in chromatic/rainbows/visualizations/animate.py
def animate_spectra(
    self, filename="animated-spectra.gif", fps=5, dpi=None, bitrate=None, **kw
):
    """
    Create an animation to show how the spectrum changes
    as we flip through every timepoint.

    Parameters
    ----------
    filename : str
        Name of file you'd like to save results in.
        Currently supports only .gif files.
    ax : Axes
        The axes into which this animated plot should go.
    fps : float
        frames/second of animation
    xlim : tuple
        Custom xlimits for the plot
    ylim : tuple
        Custom ylimits for the plot
    cmap : str,
        The color map to use for expressing wavelength
    vmin : Quantity
        The minimum value to use for the wavelength colormap
    vmax : Quantity
        The maximum value to use for the wavelength colormap
    scatterkw : dict
        A dictionary of keywords to be passed to `plt.scatter`
        so you can have more detailed control over the plot
        appearance. Common keyword arguments might include:
        `[s, c, marker, alpha, linewidths, edgecolors, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
    textkw : dict
        A dictionary of keywords passed to `plt.text`
        so you can have more detailed control over the text
        appearance. Common keyword arguments might include:
        `[alpha, backgroundcolor, color, fontfamily, fontsize,
          fontstyle, fontweight, rotation, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
    """

    self._setup_animate_spectra(**kw)

    filename = self._label_plot_file(filename)
    # initialize the animator
    writer, displayer = get_animation_writer_and_displayer(
        filename=filename, fps=fps, bitrate=bitrate
    )

    # set up to save frames directly into the animation
    figure = self._animate_spectra_components["fi"]
    with writer.saving(figure, filename, dpi or figure.get_dpi()):
        for i in tqdm(range(self.ntime), leave=False):
            self._animate_spectra_components["update"](i)
            writer.grab_frame()

    # close the figure that was created
    plt.close(figure)

    # display the animation
    from IPython.display import display

    display(displayer(filename))

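The `writer.saving(...)` / `grab_frame()` pattern used above to stream frames directly into an animation file can be sketched with plain matplotlib. This is a minimal stand-alone sketch, not chromatic code; the filename, frame count, and sine-wave data are made up:

```python
import matplotlib

matplotlib.use("Agg")  # render off-screen, no display needed
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import PillowWriter

fig, ax = plt.subplots()
(line,) = ax.plot([], [])
ax.set_xlim(0, 2 * np.pi)
ax.set_ylim(-1, 1)

# a GIF writer, analogous to what get_animation_writer_and_displayer returns
writer = PillowWriter(fps=5)
x = np.linspace(0, 2 * np.pi, 100)
with writer.saving(fig, "demo.gif", dpi=fig.get_dpi()):
    for i in range(10):
        # update the artists for frame i, then grab the frame
        line.set_data(x, np.sin(x + i / 10))
        writer.grab_frame()
plt.close(fig)
```

The advantage of this pattern, as in `animate_spectra`, is that frames are written as they are drawn rather than being accumulated in memory first.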
get_wavelength_color(self, wavelength) #

Determine the color corresponding to one or more wavelengths.

Parameters#

wavelength : Quantity
    The wavelength value(s), either an individual wavelength or an array of N wavelengths.

Returns#

colors : array
    An array of RGBA colors [or an (N, 4) array].

Source code in chromatic/rainbows/visualizations/colors.py
def get_wavelength_color(self, wavelength):
    """
    Determine the color corresponding to one or more wavelengths.

    Parameters
    ----------
    wavelength : Quantity
        The wavelength value(s), either an individual
        wavelength or an array of N wavelengths.

    Returns
    -------
    colors : array
        An array of RGBA colors [or an (N,4) array].
    """
    w_unitless = wavelength.to("micron").value
    normalized_w = self.norm(w_unitless)
    return self.cmap(normalized_w)

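The lookup above is just a colormap applied to a normalized wavelength. It can be sketched with plain matplotlib and numpy; the colormap name and the 0.5–5 micron range here are arbitrary assumptions, not chromatic defaults:

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as col

# a hypothetical wavelength-to-color setup, mimicking self.cmap and self.norm
cmap = plt.colormaps.get_cmap("viridis")
norm = col.Normalize(vmin=0.5, vmax=5.0)  # microns, already unitless here

wavelengths = np.array([0.5, 1.0, 5.0])
colors = cmap(norm(wavelengths))  # an (N, 4) array of RGBA values
```

Each row of `colors` is an RGBA tuple in [0, 1], ready to pass as `c=` or `color=` to matplotlib plotting calls.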
setup_wavelength_colors(self, cmap=None, vmin=None, vmax=None, log=None) #

Set up a color map and normalization function for coloring datapoints by their wavelengths.

Parameters#

cmap : str, Colormap
    The color map to use.
vmin : Quantity
    The wavelength at the bottom of the cmap.
vmax : Quantity
    The wavelength at the top of the cmap.
log : bool
    If True, colors will scale with log(wavelength). If False, colors will scale with wavelength. If None, the scale will be guessed from the internal wscale.

Source code in chromatic/rainbows/visualizations/colors.py
def setup_wavelength_colors(self, cmap=None, vmin=None, vmax=None, log=None):
    """
    Set up a color map and normalization function for
    coloring datapoints by their wavelengths.

    Parameters
    ----------
    cmap : str, Colormap
        The color map to use.
    vmin : Quantity
        The wavelength at the bottom of the cmap.
    vmax : Quantity
        The wavelength at the top of the cmap.
    log : bool
        If True, colors will scale with log(wavelength).
        If False, colors will scale with wavelength.
        If None, the scale will be guessed from the internal wscale.
    """

    # populate the cmap object
    self.cmap = plt.colormaps.get_cmap(cmap)

    if vmin is None:
        vmin = np.nanmin(self.wavelength)
    if vmax is None:
        vmax = np.nanmax(self.wavelength)

    # an explicit `log` argument wins; otherwise guess from the wavelength scale
    if log is None:
        log = self.wscale in ["log"]
    if log:
        self.norm = col.LogNorm(
            vmin=vmin.to("micron").value, vmax=vmax.to("micron").value
        )
    else:
        self.norm = col.Normalize(
            vmin=vmin.to("micron").value, vmax=vmax.to("micron").value
        )

imshow(self, ax=None, quantity='flux', xaxis='time', w_unit='micron', t_unit='day', colorbar=True, aspect='auto', mask_ok=True, color_ok='tomato', alpha_ok=0.8, vmin=None, vmax=None, filename=None, use_pcolormesh=True, **kw) #

Paint a 2D image of flux as a function of time and wavelength, using plt.imshow, where pixels have constant size.

Parameters#

ax : Axes, optional
    The axes into which to make this plot.
quantity : str, optional
    The fluxlike quantity to imshow. (Must be a key of `rainbow.fluxlike`.)
w_unit : str, Unit, optional
    The unit for plotting wavelengths.
t_unit : str, Unit, optional
    The unit for plotting times.
colorbar : bool, optional
    Should we include a colorbar?
aspect : str, optional
    What aspect ratio should be used for the imshow?
mask_ok : bool, optional
    Should we mark which data are not OK?
color_ok : str, optional
    The color to be used for masking data points that are not OK.
alpha_ok : float, optional
    The transparency to be used for masking data points that are not OK.
use_pcolormesh : bool
    If the grid is non-uniform, should we jump to using `pcolormesh` instead? Leaving this at the default of True gives the best chance of having real Wavelength and Time axes; setting it to False will end up showing Wavelength Index or Time Index instead (if non-uniform).
**kw : dict, optional
    All other keywords will be passed on to `plt.imshow`, so you can have more detailed control over the plot appearance. Common keyword arguments might include `[cmap, norm, interpolation, alpha, vmin, vmax]` (and more). More details are available at https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html

Source code in chromatic/rainbows/visualizations/imshow.py
def imshow(
    self,
    ax=None,
    quantity="flux",
    xaxis="time",
    w_unit="micron",
    t_unit="day",
    colorbar=True,
    aspect="auto",
    mask_ok=True,
    color_ok="tomato",
    alpha_ok=0.8,
    vmin=None,
    vmax=None,
    filename=None,
    use_pcolormesh=True,
    **kw,
):
    """
    Paint a 2D image of flux as a function of time and wavelength,
    using `plt.imshow` where pixels will have constant size.

    Parameters
    ----------
    ax : Axes, optional
        The axes into which to make this plot.
    quantity : str, optional
        The fluxlike quantity to imshow.
        (Must be a key of `rainbow.fluxlike`).
    w_unit : str, Unit, optional
        The unit for plotting wavelengths.
    t_unit : str, Unit, optional
        The unit for plotting times.
    colorbar : bool, optional
        Should we include a colorbar?
    aspect : str, optional
        What aspect ratio should be used for the imshow?
    mask_ok : bool, optional
        Should we mark which data are not OK?
    color_ok : str, optional
        The color to be used for masking data points that are not OK.
    alpha_ok : float, optional
        The transparency to be used for masking data points that are not OK.
    use_pcolormesh : bool
        If the grid is non-uniform, should jump to using `pcolormesh` instead?
        Leaving this at the default of True will give the best chance of
        having real Wavelength and Time axes; setting it to False will
        end up showing Wavelength Index or Time Index instead (if non-uniform).
    **kw : dict, optional
        All other keywords will be passed on to `plt.imshow`,
        so you can have more detailed control over the plot
        appearance. Common keyword arguments might include:
        `[cmap, norm, interpolation, alpha, vmin, vmax]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html
    """

    # self.speak(f'imshowing')
    if ax is None:
        ax = plt.subplot()

    # get units
    w_unit, t_unit = u.Unit(w_unit), u.Unit(t_unit)

    # make sure some wavelength and time edges are defined
    self._make_sure_wavelength_edges_are_defined()
    self._make_sure_time_edges_are_defined()

    # set up the wavelength extent
    try:
        wmin = self.wavelength_lower[0].to_value(w_unit)
        wmax = self.wavelength_upper[-1].to_value(w_unit)
    except AttributeError:
        wmin, wmax = None, None

    # define pcolormesh inputs, in case we need to use them below
    if use_pcolormesh:
        pcolormesh_inputs = dict(
            ax=ax,
            quantity=quantity,
            xaxis=xaxis,
            w_unit=w_unit,
            t_unit=t_unit,
            colorbar=colorbar,
            mask_ok=mask_ok,
            color_ok=color_ok,
            alpha_ok=alpha_ok,
            vmin=vmin,
            vmax=vmax,
            filename=filename,
            **kw,
        )

    if (self.wscale == "linear") and (wmin is not None) and (wmax is not None):
        wlower, wupper = wmin, wmax
        wlabel = f"{self._wave_label} ({w_unit.to_string('latex_inline')})"
    elif self.wscale == "log" and (wmin is not None) and (wmax is not None):
        wlower, wupper = np.log10(wmin), np.log10(wmax)
        wlabel = (
            r"log$_{10}$" + f"[{self._wave_label}/({w_unit.to_string('latex_inline')})]"
        )
    else:
        if use_pcolormesh:
            self.pcolormesh(**pcolormesh_inputs)
            return
        message = f"""
        The wavelength scale for this rainbow is '{self.wscale}',
        and there are {self.nwave} wavelength centers and
        {len(self.wavelike.get('wavelength_lower', []))} wavelength edges defined.

        It's hard to imshow something with a wavelength axis
        that isn't linearly or logarithmically uniform, or doesn't
        at least have its wavelength edges defined. We're giving up
        and just using the wavelength index as the wavelength axis.

        If you want a real wavelength axis, one solution would be
        to use `rainbow.pcolormesh()` instead of `rainbow.imshow()`.
        It takes basically the same inputs but can handle non-uniform
        grids.

        Or, you could bin your wavelengths to a more uniform grid with
        `binned = rainbow.bin(R=...)` (for logarithmic wavelengths)
        or `binned = rainbow.bin(dw=...)` (for linear wavelengths)
        and then `binned.imshow()` will give more informative axes.
        """
        cheerfully_suggest(message)
        wlower, wupper = -0.5, self.nwave - 0.5
        wlabel = "Wavelength Index"

    # set up the time extent
    try:
        tmin = self.time_lower[0].to_value(t_unit)
        tmax = self.time_upper[-1].to_value(t_unit)
    except AttributeError:
        tmin, tmax = None, None
    if (self.tscale == "linear") and (tmin is not None) and (tmax is not None):
        tlower, tupper = tmin, tmax
        tlabel = f"{self._time_label} ({t_unit.to_string('latex_inline')})"
    elif self.tscale == "log" and (tmin is not None) and (tmax is not None):
        tlower, tupper = np.log10(tmin), np.log10(tmax)
        tlabel = (
            r"log$_{10}$" + f"[{self._time_label}/({t_unit.to_string('latex_inline')})]"
        )
    else:
        if use_pcolormesh:
            self.pcolormesh(**pcolormesh_inputs)
            return
        message = f"""
        The time scale for this rainbow is '{self.tscale}',
        and there are {self.ntime} time centers and
        {len(self.timelike.get('time_lower', []))} time edges defined.

        It's hard to imshow something with a time axis
        that isn't linearly or logarithmically uniform, or doesn't
        at least have its time edges defined. We're giving up
        and just using the time index as the time axis.

        If you want a real time axis, one solution would be
        to use `rainbow.pcolormesh()` instead of `rainbow.imshow()`.
        It takes basically the same inputs but can handle non-uniform
        grids.

        Or, you could bin your times to a more uniform grid with
        `binned = rainbow.bin(dt=...)` (for linear times) and then
        `binned.imshow()` will give more informative axes.
        """
        cheerfully_suggest(message)
        tlower, tupper = -0.5, self.ntime - 0.5
        tlabel = "Time Index"

    def get_2D(k):
        """
        A small helper to get a 2D quantity. This is a bit of
        a kludge to help with weird cases of duplicate keys
        (for example where 'wavelength' might appear in both
        `wavelike` and `fluxlike`).
        """
        z = self.get(k)
        if np.shape(z) == self.shape:
            return z
        else:
            return self.fluxlike.get(k, None)

    # choose between time and wavelength on the x-axis
    if xaxis.lower()[0] == "t":
        self.metadata["_imshow_extent"] = [tlower, tupper, wupper, wlower]
        xlabel, ylabel = tlabel, wlabel
        z = get_2D(quantity)
        ok = get_2D("ok")
    elif xaxis.lower()[0] == "w":
        self.metadata["_imshow_extent"] = [wlower, wupper, tupper, tlower]
        xlabel, ylabel = wlabel, tlabel
        z = get_2D(quantity).T
        ok = get_2D("ok").T
    else:
        cheerfully_suggest(
            "Please specify either `xaxis='time'` or `xaxis='wavelength'` for `.plot()`"
        )

    # figure out a good shared color limits (unless already supplied)
    vmin = vmin or np.nanpercentile(remove_unit(z).flatten() * 1.0, 1)
    vmax = vmax or np.nanpercentile(remove_unit(z).flatten() * 1.0, 99)

    # define some default keywords
    imshow_kw = dict(interpolation="nearest", vmin=vmin, vmax=vmax)
    imshow_kw.update(**kw)
    with quantity_support():
        plt.sca(ax)

        # create an overlaying mask of which data are OK or not
        if mask_ok:
            okimshow_kw = dict(**imshow_kw)
            okimshow_kw.update(
                cmap=one2another(
                    bottom=color_ok,
                    top=color_ok,
                    alpha_bottom=alpha_ok,
                    alpha_top=0,
                ),
                zorder=10,
                vmin=0,
                vmax=1,
            )
            plt.imshow(
                remove_unit(ok),
                extent=self.metadata["_imshow_extent"],
                aspect=aspect,
                origin="upper",
                **okimshow_kw,
            )
        plt.imshow(
            remove_unit(z),
            extent=self.metadata["_imshow_extent"],
            aspect=aspect,
            origin="upper",
            **imshow_kw,
        )
        plt.ylabel(ylabel)
        plt.xlabel(xlabel)
        if colorbar:
            plt.colorbar(
                ax=ax,
                label=u.Quantity(z).unit.to_string("latex_inline"),
            )
        plt.title(self.get("title"))

    if filename is not None:
        self.savefig(filename)
    return ax

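The extent and origin bookkeeping above (time or wavelength on x, and the first wavelength row drawn at the top) can be sketched with plain matplotlib. This is a minimal sketch with made-up data and axis ranges, not the method itself:

```python
import matplotlib

matplotlib.use("Agg")  # render off-screen
import numpy as np
import matplotlib.pyplot as plt

nwave, ntime = 4, 6
flux = np.random.normal(1, 0.01, (nwave, ntime))

# extent = [left, right, bottom, top]; pairing origin="upper" with
# a reversed wavelength pair puts the first wavelength row at the top
tlower, tupper = 0.0, 0.1  # days, made up
wlower, wupper = 0.5, 5.0  # microns, made up
ax = plt.subplot()
im = ax.imshow(
    flux,
    extent=[tlower, tupper, wupper, wlower],
    aspect="auto",
    origin="upper",
    interpolation="nearest",
)
plt.close(ax.figure)
```

Swapping the x-axis to wavelength, as `xaxis='wavelength'` does, amounts to transposing `flux` and reordering the same four extent values.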
imshow_interact(self, quantity='Flux', t_unit='d', w_unit='micron', cmap='viridis', ylim=[], ylog=None, filename=None) #

Display interactive spectrum plot for chromatic Rainbow with a wavelength-averaged 2D quantity defined by the user. The user can interact with the 3D spectrum to choose the wavelength range over which the average is calculated.

Parameters#

self : Rainbow object
    chromatic Rainbow object to plot.
quantity : str (optional, default='flux')
    The quantity to imshow, currently either `flux` or `uncertainty`.
ylog : boolean (optional, default=None)
    Whether to take log10 of the y-axis data. If None, will be guessed from the data.
t_unit : str (optional, default='d')
    The time unit to use (seconds, minutes, hours, days, etc.).
w_unit : str (optional, default='micron')
    The wavelength unit to use.
cmap : str (optional, default='viridis')
    The color scheme to use, from the Vega documentation.
ylim : list (optional, default=[])
    If the user wants to define their own y-limits on the lightcurve plot.

Source code in chromatic/rainbows/visualizations/interactive.py
def imshow_interact(
    self,
    quantity="Flux",
    t_unit="d",
    w_unit="micron",
    cmap="viridis",
    ylim=[],
    ylog=None,
    filename=None,
):
    """
    Display interactive spectrum plot for chromatic Rainbow with a
    wavelength-averaged 2D quantity defined by the user. The user
    can interact with the 3D spectrum to choose the wavelength range
    over which the average is calculated.

    Parameters
    ----------
    self : Rainbow object
        chromatic Rainbow object to plot
    quantity : str
        (optional, default='flux')
        The quantity to imshow, currently either `flux` or `uncertainty`
    ylog : boolean
        (optional, default=None)
        Boolean for whether to take log10 of the y-axis data.
        If None, will be guessed from the data.
    t_unit : str
        (optional, default='d')
        The time unit to use (seconds, minutes, hours, days etc.)
    w_unit : str
        (optional, default='micron')
        The wavelength unit to use
    cmap : str
        (optional, default='viridis')
        The color scheme to use from Vega documentation
    ylim : list
        (optional, default=[])
        If the user wants to define their own ylimits on the lightcurve plot
    """

    # preset the x and y axes as Time (in units defined by the user) and Wavelength
    xlabel = f"Time ({t_unit})"
    ylabel = f"Wavelength ({w_unit})"

    # allow the user to plot flux or uncertainty
    if quantity.lower() == "flux":
        z = "Flux"
    elif quantity.lower() == "uncertainty":
        z = "Flux Uncertainty"
    elif quantity.lower() == "error":
        z = "Flux Uncertainty"
    elif quantity.lower() == "flux_error":
        z = "Flux Uncertainty"
    elif quantity.lower() == "flux_uncertainty":
        z = "Flux Uncertainty"
    else:
        # if the quantity is not one of the predefined values:
        cheerfully_suggest("Unrecognised Quantity!")
        return

    # convert rainbow object to pandas dataframe
    source = self.to_df(t_unit=t_unit, w_unit=w_unit)[[xlabel, ylabel, z]]

    # if there are >100,000 data points Altair will be very laggy/slow. This is probably unbinned, so
    # encourage the user to bin the Rainbow before calling this function in future.
    N_warning = 100000
    if len(source) > N_warning:
        cheerfully_suggest(
            f"""
        The dataset {self} has >{N_warning} data points.
        The interactive plot may lag. Try binning first!
        """
        )

    if (self._is_probably_normalized() == False) and "model" not in self.fluxlike:
        cheerfully_suggest(
            """
        It looks like you might be trying to use `imshow_interact` with an
        unnormalized Rainbow object. You might consider normalizing first
        with `rainbow.normalize().imshow_interact()`.
        """
        )

    # The unbinned Rainbow is sometimes in log scale, therefore plotting will be ugly with uniform axis spacing
    # ylog tells the function to take the log10 of the y-axis data
    try:
        ylog = ylog or (self.wscale == "log")
    except AttributeError:
        ylog = ylog or False

    if ylog:
        source[ylabel] = np.log10(source[ylabel])
        source = source.rename(columns={ylabel: f"log10({ylabel})"})
        ylabel = f"log10({ylabel})"

    if len(ylim) > 0:
        domain = ylim
    else:
        domain = [
            np.percentile(source[z], 2) - 0.001,
            np.percentile(source[z], 98) + 0.001,
        ]

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")

        # Add interactive part
        brush = alt.selection(type="interval", encodings=["y"])

        # Define the 3D spectrum plot
        spectrum = (
            alt.Chart(source, width=280, height=230)
            .mark_rect(
                clip=False,
                width=280 / len(self.timelike["time"]),
                height=230 / len(self.wavelike["wavelength"]),
            )
            .encode(
                x=alt.X(
                    f"{xlabel}:Q",
                    scale=alt.Scale(
                        zero=False,
                        nice=False,
                        domain=[np.min(source[xlabel]), np.max(source[xlabel])],
                    ),
                ),
                y=alt.Y(
                    f"{ylabel}:Q",
                    scale=alt.Scale(
                        zero=False,
                        nice=False,
                        domain=[np.max(source[ylabel]), np.min(source[ylabel])],
                    ),
                ),
                fill=alt.Color(
                    f"{z}:Q",
                    scale=alt.Scale(
                        scheme=cmap,
                        zero=False,
                        domain=domain,
                    ),
                ),
                tooltip=[f"{xlabel}", f"{ylabel}", f"{z}"],
            )
        )

        # gray out the background with selection
        background = spectrum.encode(color=alt.value("#ddd")).add_selection(brush)

        # highlights on the transformed data
        highlight = spectrum.transform_filter(brush)

        # Layer the various plotting parts
        spectrum_int = alt.layer(background, highlight, data=source)

        # Add the 2D averaged lightcurve (or uncertainty)
        lightcurve = (
            alt.Chart(
                source, width=280, height=230, title=f"Mean {z} for Wavelength Range"
            )
            .mark_point(filled=True, size=20, color="black")
            .encode(
                x=alt.X(
                    f"{xlabel}:Q",
                    scale=alt.Scale(
                        zero=False,
                        nice=False,
                        domain=[
                            np.min(source[xlabel])
                            - (0.02 * np.abs(np.min(source[xlabel]))),
                            1.02 * np.max(source[xlabel]),
                        ],
                    ),
                ),
                y=alt.Y(
                    f"mean({z}):Q",
                    scale=alt.Scale(zero=False, domain=domain),
                    title="Mean " + z,
                ),
            )
            .transform_filter(brush)
        )

        # display the interactive Altair plot
        (spectrum_int | lightcurve).display()
        if filename is not None:
            (spectrum_int | lightcurve).save(self._label_plot_file(filename))

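The chain of `elif` branches above that maps user input to a column label can be written more compactly as a dictionary lookup. This is a sketch of the same logic, not the library's code; the `QUANTITY_LABELS` and `label_for` names are made up:

```python
# every accepted spelling maps to one of two column labels
QUANTITY_LABELS = {
    "flux": "Flux",
    "uncertainty": "Flux Uncertainty",
    "error": "Flux Uncertainty",
    "flux_error": "Flux Uncertainty",
    "flux_uncertainty": "Flux Uncertainty",
}


def label_for(quantity):
    # None signals an unrecognized quantity, matching the warning branch
    return QUANTITY_LABELS.get(quantity.lower())
```

For example, `label_for("Error")` returns `"Flux Uncertainty"`, while an unknown string returns `None` so the caller can warn and bail out.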
pcolormesh(self, ax=None, quantity='flux', xaxis='time', w_unit='micron', t_unit='day', colorbar=True, mask_ok=True, color_ok='tomato', alpha_ok=0.8, vmin=None, vmax=None, filename=None, **kw) #

Paint a 2D image of flux as a function of time and wavelength.

By using .pcolormesh, pixels can transform based on their edges, so non-uniform axes are allowed. This is a tiny bit slower than .imshow, but otherwise very similar.

Parameters#

ax : Axes, optional
    The axes into which to make this plot.
quantity : str, optional
    The fluxlike quantity to imshow. (Must be a key of `rainbow.fluxlike`.)
w_unit : str, Unit, optional
    The unit for plotting wavelengths.
t_unit : str, Unit, optional
    The unit for plotting times.
colorbar : bool, optional
    Should we include a colorbar?
mask_ok : bool, optional
    Should we mark which data are not OK?
color_ok : str, optional
    The color to be used for masking data points that are not OK.
alpha_ok : float, optional
    The transparency to be used for masking data points that are not OK.
**kw : dict, optional
    All other keywords will be passed on to `plt.pcolormesh`, so you can have more detailed control over the plot appearance. Common keyword arguments might include `[cmap, norm, alpha, vmin, vmax]` (and more). More details are available at https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html

Source code in chromatic/rainbows/visualizations/pcolormesh.py
def pcolormesh(
    self,
    ax=None,
    quantity="flux",
    xaxis="time",
    w_unit="micron",
    t_unit="day",
    colorbar=True,
    mask_ok=True,
    color_ok="tomato",
    alpha_ok=0.8,
    vmin=None,
    vmax=None,
    filename=None,
    **kw,
):
    """
    Paint a 2D image of flux as a function of time and wavelength.

    By using `.pcolormesh`, pixels can transform based on their edges,
    so non-uniform axes are allowed. This is a tiny bit slower than
    `.imshow`, but otherwise very similar.

    Parameters
    ----------
    ax : Axes, optional
        The axes into which to make this plot.
    quantity : str, optional
        The fluxlike quantity to imshow.
        (Must be a key of `rainbow.fluxlike`).
    w_unit : str, Unit, optional
        The unit for plotting wavelengths.
    t_unit : str, Unit, optional
        The unit for plotting times.
    colorbar : bool, optional
        Should we include a colorbar?
    mask_ok : bool, optional
        Should we mark which data are not OK?
    color_ok : str, optional
        The color to be used for masking data points that are not OK.
    alpha_ok : float, optional
        The transparency to be used for masking data points that are not OK.
    **kw : dict, optional
        All other keywords will be passed on to `plt.pcolormesh`,
        so you can have more detailed control over the plot
        appearance. Common keyword arguments might include:
        `[cmap, norm, alpha, vmin, vmax]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html
    """

    # self.speak(f'imshowing')
    if ax is None:
        ax = plt.subplot()

    # get units
    w_unit, t_unit = u.Unit(w_unit), u.Unit(t_unit)

    # make sure some wavelength and time edges are defined
    self._make_sure_wavelength_edges_are_defined()
    self._make_sure_time_edges_are_defined()

    # set up the wavelength and time edges
    w_edges = leftright_to_edges(
        self.wavelength_lower.to_value(w_unit), self.wavelength_upper.to_value(w_unit)
    )
    t_edges = leftright_to_edges(
        self.time_lower.to_value(t_unit), self.time_upper.to_value(t_unit)
    )

    wlabel = f"{self._wave_label} ({w_unit.to_string('latex_inline')})"
    tlabel = f"{self._time_label} ({t_unit.to_string('latex_inline')})"

    def get_2D(k):
        """
        A small helper to get a 2D quantity. This is a bit of
        a kludge to help with weird cases of duplicate keys
        (for example where 'wavelength' might appear in both
        `wavelike` and `fluxlike`).
        """
        z = self.get(k)
        if np.shape(z) == self.shape:
            return z
        else:
            return self.fluxlike.get(k, None)

    if xaxis.lower()[0] == "t":
        x, y = t_edges, w_edges
        xlabel, ylabel = tlabel, wlabel
        z = get_2D(quantity)
        ok = get_2D("ok")
    elif xaxis.lower()[0] == "w":
        x, y = w_edges, t_edges
        xlabel, ylabel = wlabel, tlabel
        z = get_2D(quantity).T
        ok = get_2D("ok").T
    else:
        cheerfully_suggest(
            "Please specify either `xaxis='time'` or `xaxis='wavelength'` for `.plot()`"
        )

    # figure out a good shared color limits (unless already supplied)
    vmin = vmin or np.nanpercentile(remove_unit(z).flatten(), 1)
    vmax = vmax or np.nanpercentile(remove_unit(z).flatten(), 99)

    # define some default keywords
    pcolormesh_kw = dict(shading="flat", vmin=vmin, vmax=vmax)
    pcolormesh_kw.update(**kw)
    with quantity_support():
        plt.sca(ax)
        if mask_ok:
            okpcolormesh_kw = dict(**pcolormesh_kw)
            okpcolormesh_kw.update(
                cmap=one2another(
                    bottom=color_ok,
                    top=color_ok,
                    alpha_bottom=alpha_ok,
                    alpha_top=0,
                ),
                zorder=10,
                vmin=0,
                vmax=1,
            )
            plt.pcolormesh(
                remove_unit(x),
                remove_unit(y),
                remove_unit(ok),
                **okpcolormesh_kw,
            )
        plt.pcolormesh(
            remove_unit(x),
            remove_unit(y),
            remove_unit(z),
            **pcolormesh_kw,
        )
        plt.ylabel(ylabel)
        plt.xlabel(xlabel)
        if colorbar:
            plt.colorbar(
                ax=ax,
                label=u.Quantity(z).unit.to_string("latex_inline"),
            )
        # emulate origin = upper for imshow (y starts at top)
        plt.ylim(y[-1], y[0])
        plt.title(self.get("title"))

    if filename is not None:
        self.savefig(filename)
    return ax

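`leftright_to_edges` above turns the per-bin `lower` and `upper` arrays into the single (N+1)-point edge array that `plt.pcolormesh` expects. For contiguous bins it can be sketched as follows; this is a hypothetical reimplementation, assuming each bin's upper edge equals the next bin's lower edge:

```python
import numpy as np


def leftright_to_edges(lower, upper):
    """Combine N lower and N upper bin boundaries into N+1 shared edges."""
    # keep every lower edge, then close the last bin with its upper edge
    return np.concatenate([lower, [upper[-1]]])


lower = np.array([0.5, 1.0, 2.0])
upper = np.array([1.0, 2.0, 4.0])
edges = leftright_to_edges(lower, upper)  # -> [0.5, 1.0, 2.0, 4.0]
```

With `shading="flat"`, `pcolormesh` needs exactly one more edge than there are bins along each axis, which is why the method builds `w_edges` and `t_edges` this way before plotting.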
plot_lightcurves(self, quantity='flux', ax=None, spacing=None, w_unit='micron', t_unit='day', cmap=None, vmin=None, vmax=None, errorbar=True, text=True, minimum_acceptable_ok=0.8, plotkw={}, errorbarkw={}, textkw={}, filename=None, scaling=1, label_scatter=False, **kw) #

Plot flux as a sequence of offset light curves.

Parameters#

ax : Axes, optional
    The axes into which to make this plot.
spacing : None, float, optional
    The spacing between light curves. (Might still change how this works.) None uses half the standard deviation of the entire flux array.
w_unit : str, Unit, optional
    The unit for plotting wavelengths.
t_unit : str, Unit, optional
    The unit for plotting times.
cmap : str, Colormap, optional
    The color map to use for expressing wavelength.
vmin : Quantity, optional
    The minimum value to use for the wavelength colormap.
vmax : Quantity, optional
    The maximum value to use for the wavelength colormap.
errorbar : boolean, optional
    Should we plot errorbars?
text : boolean, optional
    Should we label each lightcurve?
minimum_acceptable_ok : float
    The smallest value of `ok` that will still be included. (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data.)
plotkw : dict, optional
    A dictionary of keywords passed to `plt.plot` so you can have more detailed control over the plot appearance. Common keyword arguments might include `[alpha, clip_on, zorder, marker, markersize, linewidth, linestyle, zorder]` (and more). More details are available at https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html
errorbarkw : dict, optional
    A dictionary of keywords passed to `plt.errorbar` so you can have more detailed control over the plot appearance. Common keyword arguments might include `[alpha, elinewidth, color, zorder]` (and more). More details are available at https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.errorbar.html
textkw : dict, optional
    A dictionary of keywords passed to `plt.text` so you can have more detailed control over the text appearance. Common keyword arguments might include `[alpha, backgroundcolor, color, fontfamily, fontsize, fontstyle, fontweight, rotation, zorder]` (and more). More details are available at https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
**kw : dict, optional
    Any additional keywords will be stored as `kw`. Nothing will happen with them.

Source code in chromatic/rainbows/visualizations/plot_lightcurves.py
def plot_lightcurves(
    self,
    quantity="flux",
    ax=None,
    spacing=None,
    w_unit="micron",
    t_unit="day",
    cmap=None,
    vmin=None,
    vmax=None,
    errorbar=True,
    text=True,
    minimum_acceptable_ok=0.8,
    plotkw={},
    errorbarkw={},
    textkw={},
    filename=None,
    scaling=1,
    label_scatter=False,
    **kw,
):
    """
    Plot flux as a sequence of offset light curves.

    Parameters
    ----------
    ax : Axes, optional
        The axes into which to make this plot.
    spacing : None, float, optional
        The spacing between light curves.
        (Might still change how this works.)
        None uses half the standard dev of entire flux data.
    w_unit : str, Unit, optional
        The unit for plotting wavelengths.
    t_unit : str, Unit, optional
        The unit for plotting times.
    cmap : str, Colormap, optional
        The color map to use for expressing wavelength.
    vmin : Quantity, optional
        The minimum value to use for the wavelength colormap.
    vmax : Quantity, optional
        The maximum value to use for the wavelength colormap.
    errorbar : boolean, optional
        Should we plot errorbars?
    text : boolean, optional
        Should we label each lightcurve?
    minimum_acceptable_ok : float
        The smallest value of `ok` that will still be included.
        (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
    plotkw : dict, optional
        A dictionary of keywords passed to `plt.plot`
        so you can have more detailed control over the plot
        appearance. Common keyword arguments might include:
        `[alpha, clip_on, zorder, marker, markersize,
          linewidth, linestyle, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html
    errorbarkw : dict, optional
        A dictionary of keywords passed to `plt.errorbar`
        so you can have more detailed control over the plot
        appearance. Common keyword arguments might include:
        `[alpha, elinewidth, color, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.errorbar.html
    textkw : dict, optional
        A dictionary of keywords passed to `plt.text`
        so you can have more detailed control over the text
        appearance. Common keyword arguments might include:
        `[alpha, backgroundcolor, color, fontfamily, fontsize,
          fontstyle, fontweight, rotation, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
    **kw : dict, optional
        Any additional keywords will be stored as `kw`.
        Nothing will happen with them.
    """
    if len(kw) > 0:
        message = f"""
        You provided the keyword argument(s)
        {kw}
        but this function doesn't know how to
        use them. Sorry!
        """
        cheerfully_suggest(message)

    # make sure that the wavelength-based colormap is defined
    self._make_sure_cmap_is_defined(cmap=cmap, vmin=vmin, vmax=vmax)

    w_unit, t_unit = u.Unit(w_unit), u.Unit(t_unit)

    min_time = np.nanmin(self.time.to_value(t_unit))
    max_time = np.nanmax(self.time.to_value(t_unit))

    # make sure ax is set up
    if ax is None:
        fi = plt.figure(
            figsize=plt.matplotlib.rcParams["figure.figsize"][::-1],
            constrained_layout=True,
        )
        ax = plt.subplot()
    plt.sca(ax)

    # figure out the spacing to use
    if spacing is None:
        try:
            spacing = ax._most_recent_chromatic_plot_spacing
        except AttributeError:
            spacing = 3 * np.nanstd(self.get(quantity))
    ax._most_recent_chromatic_plot_spacing = spacing

    # TO-DO: check if this Rainbow has been normalized
    if self._is_probably_normalized() or "model" in self.fluxlike:
        label_y = "1 - (0.5 + i) * spacing"
        ylim = 1 - np.array([self.nwave + 1, -1]) * spacing
    else:
        label_y = "np.median(plot_y) - 0.5 * spacing"
        cheerfully_suggest(
            """
            It's not clear if/how this object has been normalized.
            Be aware that the baseline flux levels may therefore
            be a little bit funny in .plot()."""
        )
        ylim = None
    with quantity_support():

        if label_scatter:
            measured_rms = self.get_measured_scatter(quantity="residuals")
            expected_rms = self.get_expected_uncertainty()

        #  loop through wavelengths
        for i, w in enumerate(self.wavelength):

            # grab the quantity and yerr for this particular wavelength
            t, y, sigma = self.get_ok_data_for_wavelength(
                i, minimum_acceptable_ok=minimum_acceptable_ok, y=quantity
            )

            if np.any(np.isfinite(y)):

                plot_x = t.to_value(t_unit)

                # add an offset to this quantity
                plot_y = -i * spacing + (u.Quantity(y).value - 1) * scaling + 1
                plot_sigma = u.Quantity(sigma).value * scaling

                # get the color for this quantity
                color = self.get_wavelength_color(w)

                # plot the data points (with offsets)
                this_plotkw = dict(marker="o", linestyle="-", markersize=5, color=color)
                this_plotkw.update(**plotkw)

                # set default for error bar lines
                this_errorbarkw = dict(
                    color=color, linewidth=0, elinewidth=1, zorder=-1
                )
                this_errorbarkw.update(**errorbarkw)

                if errorbar:
                    plt.errorbar(
                        plot_x,
                        plot_y,
                        yerr=plot_sigma,
                        **this_errorbarkw,
                    )
                plt.plot(plot_x, plot_y, **this_plotkw)

                # add text labels next to each quantity plot
                this_textkw = dict(va="center", color=color)
                this_textkw.update(**textkw)
                if text:
                    plt.text(
                        min_time,
                        eval(label_y),
                        f"{w.to_value(w_unit):.2f} {w_unit.to_string('latex_inline')}",
                        **this_textkw,
                    )

                if label_scatter is not False:
                    this_textkw.update(ha="right")
                    measured = measured_rms[i]
                    expected = expected_rms[i]
                    cadence = self.dt
                    if text:
                        plt.text(
                            max_time,
                            eval(label_y),
                            eval(f'f"{label_scatter}"'),
                            **this_textkw,
                        )

        # add text labels to the plot
        plt.xlabel(f"{self._time_label} ({t_unit.to_string('latex_inline')})")
        plt.ylabel("Relative Flux (+ offsets)")
        if ylim is not None:
            if ylim[1] != ylim[0]:
                plt.ylim(*ylim)
        plt.title(self.get("title"))

    if filename is not None:
        self.savefig(filename)
    return ax
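The vertical stacking above reduces to one line of arithmetic per wavelength: curve `i` is shifted down by `i * spacing`, after exaggerating deviations from a normalized flux of 1 by `scaling`. A minimal numpy sketch (illustrative, not the `chromatic` implementation itself):

```python
import numpy as np

# Sketch of the per-wavelength offset in plot_lightcurves: curve i is
# shifted down by i * spacing, with deviations from 1 stretched by `scaling`.
def offset_lightcurve(y, i, spacing, scaling=1):
    return -i * spacing + (np.asarray(y) - 1) * scaling + 1

flux = np.array([1.00, 0.98, 1.02])
shifted = offset_lightcurve(flux, i=2, spacing=0.05)
```

With `scaling=1` this reproduces the offsets in the source above; a larger `scaling` exaggerates small flux variations without changing the spacing between curves.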

Plot flux as a sequence of offset spectra.

Parameters#

ax : Axes
    The axes into which to make this plot.
spacing : None, float
    The spacing between light curves. (Might still change how this works.)
    None uses half the standard deviation of the entire flux data.
w_unit : str, Unit
    The unit for plotting wavelengths.
t_unit : str, Unit
    The unit for plotting times.
cmap : str, Colormap
    The color map to use for expressing wavelength.
vmin : Quantity
    The minimum value to use for the wavelength colormap.
vmax : Quantity
    The maximum value to use for the wavelength colormap.
errorbar : boolean
    Should we plot errorbars?
text : boolean
    Should we label each spectrum?
minimum_acceptable_ok : float
    The smallest value of `ok` that will still be included.
    (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
scatterkw : dict
    A dictionary of keywords passed to `plt.scatter` so you can have more
    detailed control over the marker appearance. Common keyword arguments
    might include `[alpha, color, s, m, edgecolor, facecolor]` (and more).
    More details are available at
    https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
errorbarkw : dict
    A dictionary of keywords passed to `plt.errorbar` so you can have more
    detailed control over the plot appearance. Common keyword arguments
    might include `[alpha, elinewidth, color, zorder]` (and more). More
    details are available at
    https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.errorbar.html
plotkw : dict
    A dictionary of keywords passed to `plt.plot` so you can have more
    detailed control over the plot appearance. Common keyword arguments
    might include `[alpha, clip_on, zorder, marker, markersize, linewidth,
    linestyle]` (and more). More details are available at
    https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html
textkw : dict
    A dictionary of keywords passed to `plt.text` so you can have more
    detailed control over the text appearance. Common keyword arguments
    might include `[alpha, backgroundcolor, color, fontfamily, fontsize,
    fontstyle, fontweight, rotation, zorder]` (and more). More details are
    available at
    https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
**kw : dict
    Any additional keywords will be stored as `kw`. Nothing will happen
    with them.

Source code in chromatic/rainbows/visualizations/plot_spectra.py
def plot_spectra(
    self,
    quantity="flux",
    ax=None,
    spacing=0.1,
    w_unit="micron",
    t_unit="day",
    cmap=None,
    vmin=None,
    vmax=None,
    errorbar=True,
    text=True,
    minimum_acceptable_ok=1,
    scatterkw={},
    errorbarkw={},
    plotkw={},
    textkw={},
    filename=None,
    **kw,
):
    """
    Plot flux as a sequence of offset spectra.

    Parameters
    ----------
    ax : Axes
        The axes into which to make this plot.
    spacing : None, float
        The spacing between light curves.
        (Might still change how this works.)
        None uses half the standard dev of entire flux data.
    w_unit : str, Unit
        The unit for plotting wavelengths.
    t_unit : str, Unit
        The unit for plotting times.
    cmap : str, Colormap
        The color map to use for expressing wavelength.
    vmin : Quantity
        The minimum value to use for the wavelength colormap.
    vmax : Quantity
        The maximum value to use for the wavelength colormap.
    errorbar : boolean
        Should we plot errorbars?
    text : boolean
        Should we label each spectrum?
    minimum_acceptable_ok : float
        The smallest value of `ok` that will still be included.
        (1 for perfect data, 1e-10 for everything but terrible data, 0 for all data)
    scatterkw : dict
        A dictionary of keywords passed to `plt.scatter`
        so you can have more detailed control over the marker
        appearance. Common keyword arguments might include:
        `[alpha, color, s, m, edgecolor, facecolor]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
    errorbarkw : dict
        A dictionary of keywords passed to `plt.errorbar`
        so you can have more detailed control over the plot
        appearance. Common keyword arguments might include:
        `[alpha, elinewidth, color, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.errorbar.html
    plotkw : dict
        A dictionary of keywords passed to `plt.plot`
        so you can have more detailed control over the plot
        appearance. Common keyword arguments might include:
        `[alpha, clip_on, zorder, marker, markersize,
          linewidth, linestyle, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html
    textkw : dict
        A dictionary of keywords passed to `plt.text`
        so you can have more detailed control over the text
        appearance. Common keyword arguments might include:
        `[alpha, backgroundcolor, color, fontfamily, fontsize,
          fontstyle, fontweight, rotation, zorder]` (and more)
        More details are available at
        https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html
    **kw : dict
        Any additional keywords will be stored as `kw`.
        Nothing will happen with them.
    """

    if len(kw) > 0:
        message = f"""
        You provided the keyword argument(s)
        {kw}
        but this function doesn't know how to
        use them. Sorry!
        """
        cheerfully_suggest(message)

    # make sure that the wavelength-based colormap is defined
    self._make_sure_cmap_is_defined(cmap=cmap, vmin=vmin, vmax=vmax)

    w_unit, t_unit = u.Unit(w_unit), u.Unit(t_unit)

    min_wave = np.nanmin(self.wavelength.to_value(w_unit))

    # make sure ax is set up
    if ax is None:
        fi = plt.figure(
            figsize=plt.matplotlib.rcParams["figure.figsize"][::-1],
            constrained_layout=True,
        )
        ax = plt.subplot()
    plt.sca(ax)

    # figure out the spacing to use
    if spacing is None:
        try:
            spacing = ax._most_recent_chromatic_plot_spacing
        except AttributeError:
            spacing = 3 * np.nanstd(self.get(quantity))
    ax._most_recent_chromatic_plot_spacing = spacing

    # TO-DO: check if this Rainbow has been normalized
    '''cheerfully_suggest(
        """
    It's not clear if/how this object has been normalized.
    Be aware that the baseline flux levels may therefore
    be a little bit funny in .plot()."""
    )'''
    with quantity_support():

        #  loop through times
        for i, t in enumerate(self.time):
            # grab the spectrum for this particular time
            w, y, sigma = self.get_ok_data_for_time(
                i, minimum_acceptable_ok=minimum_acceptable_ok, y=quantity
            )
            if np.any(np.isfinite(y)):

                plot_x = w.to_value(w_unit)

                # add an offset to this spectrum
                plot_y = -i * spacing + u.Quantity(y).value
                plot_sigma = u.Quantity(sigma).value

                default_color = "black"

                # set default for background line plot
                this_plotkw = dict(color=default_color, zorder=-1)
                this_plotkw.update(**plotkw)

                # set default for scatter plot with points
                this_scatterkw = dict(
                    marker="o",
                    linestyle="-",
                    c=plot_x,
                    cmap=self.cmap,
                    norm=self.norm,
                )
                this_scatterkw.update(**scatterkw)

                # set default for error bar lines
                this_errorbarkw = dict(
                    color=default_color, linewidth=0, elinewidth=1, zorder=-1
                )
                this_errorbarkw.update(**errorbarkw)

                if errorbar:
                    plt.errorbar(
                        plot_x,
                        plot_y,
                        yerr=plot_sigma,
                        **this_errorbarkw,
                    )
                plt.plot(plot_x, plot_y, **this_plotkw)
                plt.scatter(plot_x, plot_y, **this_scatterkw)

                # add text labels next to each spectrum
                this_textkw = dict(va="center", color=default_color)
                this_textkw.update(**textkw)
                if text:
                    plt.text(
                        min_wave,
                        np.median(plot_y) - 0.5 * spacing,
                        f"{t.to_value(t_unit):.2f} {t_unit.to_string('latex_inline')}",
                        **this_textkw,
                    )

        # add text labels to the plot
        plt.xlabel(f"Wavelength ({w_unit.to_string('latex_inline')})")
        plt.ylabel("Relative Flux (+ offsets)")
        if self.get("wscale") == "log":
            plt.xscale("log")
        plt.title(self.get("title"))
    if filename is not None:
        self.savefig(filename)
    return ax
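When `spacing=None` and no earlier spacing is cached on the axes, the source above falls back to three times the NaN-ignoring standard deviation of the plotted quantity. A minimal numpy sketch of that fallback:

```python
import numpy as np

# Sketch of the automatic spacing fallback: 3x the NaN-ignoring
# standard deviation of whatever quantity is being plotted.
def default_spacing(values):
    return 3 * np.nanstd(values)

flux = np.array([[1.00, 1.10], [0.90, np.nan]])
spacing = default_spacing(flux)
```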

Plot flux either as a sequence of offset lightcurves (default) or as a sequence of offset spectra.

Parameters#

xaxis : string
    What should be plotted on the x-axis of the plot?
    'time' will plot a different light curve for each wavelength;
    'wavelength' will plot a different spectrum for each time point.
**kw : dict
    All other keywords will be passed along to either `.plot_lightcurves`
    or `.plot_spectra` as appropriate. Please see the docstrings for
    either of those functions to figure out what keyword arguments you
    might want to provide here.

Source code in chromatic/rainbows/visualizations/plot.py
def plot(self, xaxis="time", **kw):
    """
    Plot flux either as a sequence of offset lightcurves (default)
    or as a sequence of offset spectra.

    Parameters
    ----------
    xaxis : string
        What should be plotted on the x-axis of the plot?
        'time' will plot a different light curve for each wavelength
        'wavelength' will plot a different spectrum for each timepoint
    **kw : dict
        All other keywords will be passed along to either
        `.plot_lightcurves` or `.plot_spectra` as appropriate.
        Please see the docstrings for either of those functions
        to figure out what keyword arguments you might want to
        provide here.
    """

    if xaxis.lower()[0] == "t":
        return self.plot_lightcurves(**kw)
    elif xaxis.lower()[0] == "w":
        return self.plot_spectra(**kw)
    else:
        cheerfully_suggest("Please specify either 'time' or 'wavelength' for `.plot()`")
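The dispatch above inspects only the first letter of `xaxis`, so abbreviations like 't' or 'w' work too. A standalone sketch of that matching logic:

```python
# Sketch of .plot()'s dispatch: only the first letter of xaxis matters,
# so 'time', 'Time', and 't' all select light curves, and anything
# starting with 'w' selects spectra.
def choose_plot(xaxis="time"):
    first = xaxis.lower()[0]
    if first == "t":
        return "plot_lightcurves"
    if first == "w":
        return "plot_spectra"
    return None
```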

🔨 Tools#

Calculate the surface flux from a thermally emitting surface, according to the Planck function, in units of photons/(s * m**2 * nm).

Parameters#

temperature : Quantity
    The temperature of the thermal emitter, with units of K.
wavelength : Quantity, optional
    The wavelengths at which to calculate, with units of wavelength.
R : float, optional
    The spectroscopic resolution for creating a log-uniform grid that
    spans the limits set by `wlim`, only if `wavelength` is not defined.
wlim : Quantity, optional
    The two-element [lower, upper] limits of a wavelength grid that would
    be populated with resolution `R`, only if `wavelength` is not defined.
**kw : dict, optional
    Other keyword arguments will be ignored.

Returns#

wavelength : Quantity
    The wavelengths at which the flux was evaluated.
photons : Quantity
    The surface flux in photon units.

This evaluates the Planck function at the exact wavelength values; it doesn't do anything fancy to integrate over bin widths, so if you're using very wide bins (R ~ a few), your integrated fluxes will be inaccurate.

Source code in chromatic/spectra/planck.py
def get_planck_photons(
    temperature=3000, wavelength=None, R=100, wlim=[0.04, 6] * u.micron, **kw
):
    """
    Calculate the surface flux from a thermally emitting surface,
    according to Planck function, in units of photons/(s * m**2 * nm).

    Parameters
    ----------
    temperature : Quantity
        The temperature of the thermal emitter,
        with units of K.
    wavelength : Quantity, optional
        The wavelengths at which to calculate,
        with units of wavelength.
    R : float, optional
        The spectroscopic resolution for creating a log-uniform
        grid that spans the limits set by `wlim`, only if
        `wavelength` is not defined.
    wlim : Quantity, optional
        The two-element [lower, upper] limits of a wavelength
        grid that would be populated with resolution `R`, only if
        `wavelength` is not defined.
    **kw : dict, optional
        Other keyword arguments will be ignored.

    Returns
    -------
    wavelength : Quantity
        The wavelengths at which the flux was evaluated.
    photons : Quantity
        The surface flux in photon units.

    This evaluates the Planck function at the exact
    wavelength values; it doesn't do anything fancy to integrate
    over binwidths, so if you're using very wide (R~a few) bins
    your integrated fluxes will be messed up.

    """

    # make sure the temperature unit is good (whether or not it's supplied)
    temperature_unit = u.Quantity(temperature).unit
    if temperature_unit == u.K:
        temperature_with_unit = temperature
    elif temperature_unit == u.Unit(""):
        temperature_with_unit = temperature * u.K

    # create a wavelength grid if one isn't supplied
    if wavelength is None:
        wavelength_unit = wlim.unit
        wavelength = (
            np.exp(np.arange(np.log(wlim[0].value), np.log(wlim[1].value), 1 / R))
            * wavelength_unit
        )

    energy = calculate_planck_flux(
        wavelength=wavelength, temperature=temperature_with_unit
    )
    photon_energy = con.h * con.c / wavelength / u.ph

    return wavelength, (energy / photon_energy).to(u.ph / (u.s * u.m**2 * u.nm))
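The photon conversion above divides an energy flux by the per-photon energy h*c/lambda. A self-contained sketch of the same calculation with `scipy.constants` (an illustrative reimplementation, not the library code):

```python
import numpy as np
from scipy.constants import h, c, k

# Planck surface flux in photon units: pi * B_lambda is the energy flux
# from a thermally emitting surface; dividing by h*c/lambda (the energy
# per photon) converts it to a photon flux.
def planck_photon_flux(wavelength_m, temperature_K):
    b = (2 * h * c**2 / wavelength_m**5) / np.expm1(
        h * c / (wavelength_m * k * temperature_K)
    )
    return np.pi * b / (h * c / wavelength_m)  # photons / (s * m**2 * m)

# A hotter surface emits more photons at every wavelength.
hot = planck_photon_flux(500e-9, 5800.0)
cool = planck_photon_flux(500e-9, 3000.0)
```

Note the use of `np.expm1` for numerical stability in the Rayleigh-Jeans limit, where the exponent is small.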

Get a PHOENIX model spectrum for an arbitrary temperature, logg, metallicity.

Calculate the surface flux from a thermally emitting surface, according to PHOENIX model spectra, in units of photons/(s * m**2 * nm).

Parameters#

temperature : float, optional
    Temperature, in K (with no astropy units attached).
logg : float, optional
    Surface gravity log10[g/(cm/s**2)] (with no astropy units attached).
metallicity : float, optional
    Metallicity log10[metals/solar] (with no astropy units attached).
R : float, optional
    Spectroscopic resolution (lambda/dlambda). Currently, this must be
    one of [3, 10, 30, 100, 300, 1000, 3000, 10000, 30000, 100000], but
    check back soon for custom wavelength grids. There is extra overhead
    associated with switching resolutions, so if you're going to retrieve
    many spectra, try to group by resolution. (If you're using the
    `wavelength` or `wavelength_edges` option below, please ensure your
    requested R exceeds that needed to support your wavelengths.)
wavelength : Quantity, optional
    A grid of wavelengths on which you would like your spectrum. If this
    is None, the complete wavelength array will be returned at your
    desired resolution. Otherwise, the spectrum will be returned exactly
    at those wavelengths. Grid points will be cached for this new
    wavelength grid to speed up applications that need to retrieve lots
    of similar spectra for the same wavelength (like many optimization
    or sampling problems).
wavelength_edges : Quantity, optional
    Same as `wavelength` (see above!) but defining the wavelength grid by
    its edges instead of its centers. The returned spectrum will have 1
    fewer element than `wavelength_edges`.

Returns#

wavelength : Quantity
    The wavelengths, at the specified resolution.
photons : Quantity
    The surface flux in photon units.

Source code in chromatic/spectra/phoenix.py
def get_phoenix_photons(
    temperature=5780,
    logg=4.43,
    metallicity=0.0,
    R=100,
    wavelength=None,
    wavelength_edges=None,
    visualize=False,
):
    """
    Get a PHOENIX model spectrum for an arbitrary temperature, logg, metallicity.

    Calculate the surface flux from a thermally emitting surface,
    according to PHOENIX model spectra, in units of photons/(s * m**2 * nm).

    Parameters
    ----------
    temperature : float, optional
        Temperature, in K (with no astropy units attached).
    logg : float, optional
        Surface gravity log10[g/(cm/s**2)] (with no astropy units attached).
    metallicity : float, optional
        Metallicity log10[metals/solar] (with no astropy units attached).
    R : float, optional
        Spectroscopic resolution (lambda/dlambda). Currently, this must
        be in one of [3,10,30,100,300,1000,3000,10000,30000,100000], but
        check back soon for custom wavelength grids. There is extra
        overhead associated with switching resolutions, so if you're
        going to retrieve many spectra, try to group by resolution.
        (If you're using the `wavelength` or `wavelength_edges` option
        below, please be ensure your requested R exceeds that needed
        to support your wavelengths.)
    wavelength : Quantity, optional
        A grid of wavelengths on which you would like your spectrum.
        If this is None, the complete wavelength array will be returned
        at your desired resolution. Otherwise, the spectrum will be
        returned exactly at those wavelengths. Grid points will be
        cached for this new wavelength grid to speed up applications
        that need to retrieve lots of similar spectra for the same
        wavelength (like many optimization or sampling problems).
    wavelength_edges : Quantity, optional
        Same as `wavelength` (see above!) but defining the wavelength
        grid by its edges instead of its centers. The returned spectrum
        will have 1 fewer element than `wavelength_edges`.

    Returns
    -------
    wavelength : Quantity
        The wavelengths, at the specified resolution.
    photons : Quantity
        The surface flux in photon units
    """
    return phoenix_library.get_spectrum(
        temperature=temperature,
        logg=logg,
        metallicity=metallicity,
        R=R,
        wavelength=wavelength,
        wavelength_edges=wavelength_edges,
        visualize=visualize,
    )

Tools for resampling arrays from one grid of independent variables to another.

bintoR(x, y, unc=None, R=50, xlim=None, weighting='inversevariance', drop_nans=True) #

Bin any x and y array onto a logarithmically uniform grid.

Parameters#

x : array
    The original independent variable.
    (For a spectrum, for example, x = wavelength.)
y : array
    The original dependent variable (same size as x).
    (For a spectrum, for example, y = flux.)
unc : array, None, optional
    The uncertainty on the dependent variable.
    (For a spectrum, for example, the flux uncertainty.)
R : array, optional
    The spectral resolution R=x/dx for creating a new, logarithmically
    uniform grid that starts at the first value of x.
xlim : list, array, optional
    A two-element list indicating the min and max values of x for the new
    logarithmically spaced grid. If None, these limits will be created
    from the data themselves.
weighting : str, optional
    How should we weight values when averaging them together into one
    larger bin?
    `weighting = 'inversevariance'` uses weights = 1/unc**2;
    any other value uses uniform weights.
    This will have no impact if `unc == None`, or for any new bins that
    effectively overlap less than one original unbinned point.
drop_nans : bool, optional
    Should we skip any bins that turn out to be NaNs?
    This most often happens when bins are empty.

Returns#

result : dict
    A dictionary containing at least...
        `x` = the center of the output grid
        `y` = the resampled value on the output grid
        `x_edge_lower` = the lower edges of the output grid
        `x_edge_upper` = the upper edges of the output grid
    ...and possibly also
        `uncertainty` = the calculated uncertainty per bin

Source code in chromatic/resampling.py
def bintoR(
    x, y, unc=None, R=50, xlim=None, weighting="inversevariance", drop_nans=True
):
    """
    Bin any x and y array onto a logarithmically uniform grid.

    Parameters
    ----------
    x : array
        The original independent variable.
        (For a spectrum example = wavelength)
    y : array
        The original dependent variable (same size as x).
        (For a spectrum example = flux)
    unc : array, None, optional
        The uncertainty on the dependent variable
        (For a spectrum example = the flux uncertainty)
    R : array, optional
        The spectral resolution R=x/dx for creating a new,
        logarithmically uniform grid that starts at the first
        value of x.
    xlim : list, array, optional
        A two-element list indicating the min and max values of
        x for the new logarithmically spaced grid. If None,
        these limits will be created from the data themselves
    weighting : str, optional
        How should we weight values when averaging
        them together into one larger bin?
        `weighting = 'inversevariance'`
            weights = 1/unc**2
         `weighting = {literally anything else}`
            uniform weights
        This will have no impact if `unc == None`, or for any
        new bins that effectively overlap less than one original
        unbinned point.
    drop_nans : bool, optional
        Should we skip any bins that turn out to be NaNs?
        This most often happens when bins are empty.

    Returns
    -------
    result : dict
        A dictionary containing at least...
            `x` = the center of the output grid
            `y` = the resampled value on the output grid
            `x_edge_lower` = the lower edges of the output grid
            `x_edge_upper` = the upper edges of the output grid
        ...and possibly also
            `uncertainty` = the calculated uncertainty per bin
    """

    try:
        x_unit = x.unit
        x_without_unit = x.value
    except AttributeError:
        x_unit = 1
        x_without_unit = x

    # create a new grid of x at the given resolution
    lnx = np.log(x_without_unit)
    dnewlnx = 1.0 / R

    # set the limits of the new xgrid (in log space)
    if xlim is None:
        # use the input grid to set the limits
        lnxbottom, lnxtop = np.nanmin(lnx), np.nanmax(lnx)
    else:
        # use the custom xlim to set the limits
        lnxbottom, lnxtop = xlim

    # create a new, log-uniform grid of x values
    newlnx = np.arange(lnxbottom, lnxtop + dnewlnx, dnewlnx)

    # now do the binning on a uniform grid of lnx
    result = bintogrid(
        lnx, y, unc, newx=newlnx, weighting=weighting, drop_nans=drop_nans
    )

    # convert back from log to real values
    for k in ["x", "x_edge_lower", "x_edge_upper"]:
        result[k] = np.exp(result[k]) * x_unit

    return result
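
The log-uniform grid construction above is easy to check standalone: at a spectral resolution R, adjacent grid points differ by a constant fractional step of roughly 1/R. Here is a minimal NumPy sketch (the range 1-5 and R = 100 are arbitrary):

```python
import numpy as np

# A minimal standalone sketch of the log-uniform grid construction above,
# assuming a resolution R = x/dx and an arbitrary input range of 1-5.
R = 100
x = np.linspace(1.0, 5.0, 1000)

lnx = np.log(x)
newlnx = np.arange(lnx.min(), lnx.max() + 1.0 / R, 1.0 / R)
newx = np.exp(newlnx)

# adjacent points are separated by a constant fractional step exp(1/R) - 1 ~ 1/R
fractional_steps = np.diff(newx) / newx[:-1]
assert np.allclose(fractional_steps, np.exp(1.0 / R) - 1)
```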

bintogrid(x=None, y=None, unc=None, newx=None, newx_edges=None, dx=None, nx=None, weighting='inversevariance', drop_nans=True, x_edges=None, visualize=False) #

Bin any x and y array onto a linearly uniform grid.

Parameters#

x : array
    The original independent variable.
    (For a spectrum example = wavelength)
y : array
    The original dependent variable (same size as x).
    (For a spectrum example = flux)
unc : array, None
    The uncertainty on the dependent variable.
    (For a spectrum example = the flux uncertainty)
nx : array
    The number of bins from the original grid to bin together
    into the new one.
dx : array
    The fixed spacing for creating a new, linearly uniform grid
    that starts at the first value of x. This will be ignored
    if `newx != None`.
newx : array
    A new custom grid onto which we should bin.
newx_edges : array
    The edges of the new grid of bins for the independent variable,
    onto which you want to resample the y values. The left and right
    edges of the bins will be, respectively, `newx_edges[:-1]` and
    `newx_edges[1:]`, so the size of the output array will be
    `len(newx_edges) - 1`.
weighting : str
    How should we weight values when averaging them together into
    one larger bin?
    `weighting = 'inversevariance'` uses weights = 1/unc**2;
    `weighting = {literally anything else}` uses uniform weights.
    This will have no impact if `unc == None`, or for any new bins
    that effectively overlap less than one original unbinned point.
drop_nans : bool
    Should we skip any bins that turn out to be NaNs?
    This most often happens when bins are empty.
x_edges : array
    The edges of the original independent variable bins. The left
    and right edges of the bins are interpreted to be `x_edges[:-1]`
    and `x_edges[1:]`, respectively, so the associated `y` should
    have exactly 1 fewer element than `x_edges`. This provides finer
    control over the size of each bin in the input than simply
    supplying `x` (still a little experimental).

Returns#

result : dict
    A dictionary containing at least...
        `x` = the center of the output grid
        `y` = the resampled value on the output grid
        `x_edge_lower` = the lower edges of the output grid
        `x_edge_upper` = the upper edges of the output grid
    ...and possibly also
        `uncertainty` = the calculated uncertainty per bin

The order of precedence for setting the new grid is [`newx_edges`, `newx`, `dx`, `nx`]. The first supplied will be used, and the others will be ignored.

Source code in chromatic/resampling.py
def bintogrid(
    x=None,
    y=None,
    unc=None,
    newx=None,
    newx_edges=None,
    dx=None,
    nx=None,
    weighting="inversevariance",
    drop_nans=True,
    x_edges=None,
    visualize=False,
):
    """
    Bin any x and y array onto a linearly uniform grid.

    Parameters
    ----------
    x : array
        The original independent variable.
        (For a spectrum example = wavelength)
    y : array
        The original dependent variable (same size as x).
        (For a spectrum example = flux)
    unc : array, None
        The uncertainty on the dependent variable
        (For a spectrum example = the flux uncertainty)
    nx : array
        The number of bins from the original grid to
        bin together into the new one.
    dx : array
        The fixed spacing for creating a new, linearly uniform
        grid that starts at the first value of x. This will
        be ignored if `newx` != None.
    newx : array
        A new custom grid onto which we should bin.
    newx_edges : array
        The edges of the new grid of bins for the independent
        variable, onto which you want to resample the y
        values. The left and right edges of the bins will be,
        respectively, `newx_edges[:-1]` and `newx_edges[1:]`,
        so the size of the output array will be
        `len(newx_edges) - 1`
    weighting : str
        How should we weight values when averaging
        them together into one larger bin?
        `weighting = 'inversevariance'`
            weights = 1/unc**2
         `weighting = {literally anything else}`
            uniform weights
        This will have no impact if `unc == None`, or for any
        new bins that effectively overlap less than one original
        unbinned point.
    drop_nans : bool
        Should we skip any bins that turn out to be NaNs?
        This most often happens when bins are empty.
    x_edges : array
        The edges of the original independent variable bins.
        The left and right edges of the bins are interpreted
        to be `x_edges[:-1]` and `x_edges[1:]`,
        respectively, so the associated `y` should have exactly
        1 fewer element than `x_edges`. This provides finer
        control over the size of each bin in the input than
        simply supplying `x` (still a little experimental).

    Returns
    -------
    result : dict
        A dictionary containing at least...
            `x` = the center of the output grid
            `y` = the resampled value on the output grid
            `x_edge_lower` = the lower edges of the output grid
            `x_edge_upper` = the upper edges of the output grid
        ...and possibly also
            `uncertainty` = the calculated uncertainty per bin


    The order of precedence for setting the new grid is
    [`newx_edges`, `newx`, `dx`, `nx`]
    The first will be used, and others will be ignored.
    """

    # check that an OK set of inputs has been supplied
    if (x is not None) and (x_edges is not None):
        raise RuntimeError(
            """🌈 Both `x` and `x_edges` were supplied to `bintogrid`. Confusing!"""
        )
    if (x is None) and (x_edges is None):
        raise RuntimeError(
            """🌈 At least one of `x` or `x_edges` must be supplied to `bintogrid`."""
        )
    if y is None:
        raise RuntimeError("""🌈 `y` must be supplied to `bintogrid`.""")

    # make sure the edges and the centers are set
    if x is None:
        x_left, x_right = edges_to_leftright(x_edges)
        x = 0.5 * (x_left + x_right)
    else:
        x_left, x_right = calculate_bin_leftright(x)
        x_edges = leftright_to_edges(x_left, x_right)
    try:
        x_unit = x.unit
        x_without_unit = x.value
    except AttributeError:
        x_unit = 1
        x_without_unit = x

    try:
        y_unit = y.unit
        y_without_unit = y.value
    except AttributeError:
        y_unit = 1
        y_without_unit = y

    # warn if multiple inputs are provided
    number_of_grid_options = np.sum([z is not None for z in [newx_edges, newx, dx, nx]])
    if number_of_grid_options > 1:
        cheerfully_suggest(
            """More than one output grid sent to `bintogrid`.
                         The one being used is the first to appear in
                         [`newx_edges`, `newx`, `dx`, `nx`]
                         but you might want to choose more carefully."""
        )

    # define inputs based on the following order
    if newx_edges is not None:
        # define grid by its edges (and define others from there)
        newx_edges_without_unit = u.Quantity(newx_edges).to(x_unit).value
        dx_without_unit = np.diff(newx_edges_without_unit)
        newx_without_unit = newx_edges_without_unit[:-1] + 0.5 * dx_without_unit
        newx_left_without_unit = newx_edges_without_unit[:-1]
        newx_right_without_unit = newx_edges_without_unit[1:]

        # make sure the final output grid is defined
        final_newx, final_newx_left, final_newx_right = (
            newx_without_unit * x_unit,
            newx_left_without_unit * x_unit,
            newx_right_without_unit * x_unit,
        )
    elif newx is not None:
        # define grid by its centers (and define others from there)
        newx_without_unit = u.Quantity(newx).to(x_unit).value
        newx_left_without_unit, newx_right_without_unit = calculate_bin_leftright(
            newx_without_unit
        )
        newx_edges_without_unit = np.hstack(
            [newx_left_without_unit, newx_right_without_unit[-1]]
        )
        dx_without_unit = np.diff(newx_edges_without_unit)

        # make sure the final output grid is defined
        final_newx, final_newx_left, final_newx_right = (
            newx_without_unit * x_unit,
            newx_left_without_unit * x_unit,
            newx_right_without_unit * x_unit,
        )
    elif dx is not None:
        # define grid by a bin width (and define others from there)
        dx_without_unit = u.Quantity(dx).to(x_unit).value
        newx_without_unit = np.arange(
            np.nanmin(x_without_unit),
            np.nanmax(x_without_unit) + dx_without_unit,
            dx_without_unit,
        )
        newx_left_without_unit, newx_right_without_unit = calculate_bin_leftright(
            newx_without_unit
        )
        newx_edges_without_unit = np.hstack(
            [newx_left_without_unit, newx_right_without_unit[-1]]
        )

        # make sure the final output grid is defined
        final_newx, final_newx_left, final_newx_right = (
            newx_without_unit * x_unit,
            newx_left_without_unit * x_unit,
            newx_right_without_unit * x_unit,
        )

    elif nx is not None:
        # keep track of the original input x values
        original_x_without_unit = x_without_unit

        # redefine the input x to indices, to do interpolation in index space
        x_without_unit = np.arange(0, len(x_without_unit))

        # define a grid of edges that will enclose the right number of indices
        x_left_i, x_right_i = calculate_bin_leftright(x_without_unit)
        newx_edges_without_unit = leftright_to_edges(x_left_i, x_right_i)[::nx]
        newx_without_unit = 0.5 * (
            newx_edges_without_unit[1:] + newx_edges_without_unit[:-1]
        )

        # calculate the actual x values corresponding to the bins
        original_edges = leftright_to_edges(
            *calculate_bin_leftright(original_x_without_unit)
        )
        final_edges = original_edges[::nx] * x_unit
        final_newx_left, final_newx_right = edges_to_leftright(final_edges)
        final_newx = 0.5 * (final_newx_left + final_newx_right)
        dx_without_unit = (final_newx_right - final_newx_left) / x_unit
    else:
        raise RuntimeError(
            """No output grid sent to `bintogrid`.
                              Please choose one of the following:
                              [`newx_edges`, `newx`, `dx`, `nx`]"""
        )

    # don't complain about zero-divisions in here (to allow infinite uncertainties)
    with np.errstate(divide="ignore", invalid="ignore"):

        # calculate weight integrals for the bin array
        ok = ~np.isnan(y_without_unit)

        # resample the sums onto that new grid
        if unc is None:
            weights = np.ones_like(x_without_unit)
        else:
            if weighting == "inversevariance":
                weights = 1 / unc**2
            else:
                weights = np.ones_like(x_without_unit)

            # ignore infinite weights (= 0 uncertainties)
            ok *= np.isfinite(weights)

        if np.any(ok):
            numerator = resample_while_conserving_flux(
                xin=x_without_unit[ok],
                yin=(y_without_unit * weights)[ok],
                xout_edges=newx_edges_without_unit,
            )
            denominator = resample_while_conserving_flux(
                xin=x_without_unit[ok],
                yin=weights[ok],
                xout_edges=newx_edges_without_unit,
            )

            # the binned weighted means on the new grid
            newy = numerator["y"] / denominator["y"]

            # the standard error on the means, for those bins
            newunc = np.sqrt(1 / denominator["y"])

            # keep track of the number of original bins going into each new bin
            number_of_original_bins_per_new_bin = resample_while_conserving_flux(
                xin=x_without_unit[ok],
                yin=np.ones_like(y_without_unit)[ok],
                xout_edges=newx_edges_without_unit,
            )["y"]
        else:
            newy = np.nan * newx_without_unit
            newunc = np.nan * newx_without_unit
            number_of_original_bins_per_new_bin = np.zeros_like(newx_without_unit)

    # remove any empty bins
    if drop_nans:
        ok = np.isfinite(newy)
    else:
        ok = np.ones_like(newx_without_unit).astype(bool)

    # if no uncertainties were given, don't return uncertainties
    result = {}

    # populate the new grid centers + edges + values
    result["x"] = final_newx[ok]
    result["x_edge_lower"] = final_newx_left[ok]
    result["x_edge_upper"] = final_newx_right[ok]

    # populate the new grid values
    result["y"] = newy[ok] * y_unit

    # populate the new grid value uncertainties
    if unc is not None:
        result["uncertainty"] = newunc[ok] * y_unit

    # store how many of the original pixels made it into this new one
    result["N_unbinned/N_binned"] = number_of_original_bins_per_new_bin[ok]
    if visualize:
        fi, ax = plt.subplots(
            2, 1, figsize=(8, 4), dpi=300, gridspec_kw=dict(height_ratios=[1, 0.2])
        )
        plt.sca(ax[0])
        plot_as_boxes(x, y, xleft=x_left, xright=x_right, color="silver", linewidth=1)
        ekw = dict(elinewidth=1, linewidth=0)
        plt.errorbar(x, y, yerr=unc, color="silver", marker="s", **ekw)
        plt.errorbar(
            result["x"],
            result["y"],
            yerr=result.get("uncertainty", None),
            xerr=0.5 * (result["x_edge_upper"] - result["x_edge_lower"]) * x_unit,
            marker="o",
            color="black",
            zorder=100,
            **ekw,
        )
        plt.sca(ax[1])
        plot_as_boxes(
            result["x"],
            result["N_unbinned/N_binned"],
            xleft=result["x_edge_lower"],
            xright=result["x_edge_upper"],
        )
        plt.ylabel("$N_{unbinned}/N_{binned}$")
        plt.ylim(0, None)

    return result
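
To make the `'inversevariance'` option concrete, here is a sketch of the weighted mean that `bintogrid` computes within a single output bin (the two data points are arbitrary illustration values):

```python
import numpy as np

# A sketch of the inverse-variance weighting `bintogrid` applies within one
# output bin, assuming two original points land in the same new bin.
y = np.array([10.0, 12.0])
unc = np.array([1.0, 2.0])

weights = 1 / unc**2                        # weighting='inversevariance'
binned_y = np.sum(weights * y) / np.sum(weights)
binned_unc = np.sqrt(1 / np.sum(weights))   # standard error on the weighted mean

assert np.isclose(binned_y, 10.4)           # pulled toward the better-measured point
assert np.isclose(binned_unc, np.sqrt(0.8))
```

The more precise point (smaller `unc`) dominates the average, and the combined uncertainty is smaller than either individual one.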

calculate_bin_leftright(x) #

If x is an array of bin centers, calculate the bin edges. (assumes outermost bins are same size as their neighbors)

Parameters#

x : array
    The array of bin centers.

Returns#

l : array
    The left edges of the bins.
r : array
    The right edges of the bins.

Source code in chromatic/resampling.py
def calculate_bin_leftright(x):
    """
    If x is an array of bin centers, calculate the bin edges.
    (assumes outermost bins are same size as their neighbors)

    Parameters
    ----------
    x : array
        The array of bin centers.

    Returns
    -------
    l : array
        The left edges of the bins.
    r : array
        The right edges of the bins.
    """

    # what are bin edges (making a guess for those on the ends)
    # xbinsize = calculate_bin_widths(x)
    # left = x - xbinsize / 2.0
    # right = x + xbinsize / 2.0

    # weird corner case!
    if len(x) == 1:
        left, right = np.sort([0, 2 * x[0]])
        return np.array([left]), np.array([right])

    inner_edges = 0.5 * np.diff(x) + x[:-1]
    first_edge = x[0] - (inner_edges[0] - x[0])
    last_edge = x[-1] + (x[-1] - inner_edges[-1])

    left = np.hstack([first_edge, inner_edges])
    right = np.hstack([inner_edges, last_edge])

    return left, right
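
A quick standalone check of the edge logic above: for uniformly spaced centers, the recovered edges sit exactly halfway between neighbors, with the outermost edges mirrored outward.

```python
import numpy as np

# A standalone check of the edge logic in `calculate_bin_leftright`:
# for uniformly spaced centers, edges sit halfway between neighbors.
x = np.array([1.0, 2.0, 3.0, 4.0])

inner_edges = 0.5 * np.diff(x) + x[:-1]        # midpoints between centers
first_edge = x[0] - (inner_edges[0] - x[0])    # mirror the first gap outward
last_edge = x[-1] + (x[-1] - inner_edges[-1])  # mirror the last gap outward

left = np.hstack([first_edge, inner_edges])
right = np.hstack([inner_edges, last_edge])

assert np.allclose(left, [0.5, 1.5, 2.5, 3.5])
assert np.allclose(right, [1.5, 2.5, 3.5, 4.5])
```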

calculate_bin_widths(x) #

If x is an array of bin centers, calculate the bin sizes. (assumes outermost bins are same size as their neighbors)

Parameters#

x : array
    The array of bin centers.

Returns#

s : array
    The array of bin sizes (total size, from left to right).

Source code in chromatic/resampling.py
def calculate_bin_widths(x):
    """
    If x is an array of bin centers, calculate the bin sizes.
    (assumes outermost bins are same size as their neighbors)

    Parameters
    ----------
    x : array
        The array of bin centers.

    Returns
    -------
    s : array
        The array of bin sizes (total size, from left to right).
    """

    # OLD VERSION
    # binsize = np.zeros_like(x)
    # binsize[0:-1] = x[1:] - x[0:-1]
    # binsize[-1] = binsize[-2]
    left, right = calculate_bin_leftright(x)
    binsize = right - left
    return binsize

edges_to_leftright(edges) #

Convert N+1 contiguous edges to two arrays of N left/right edges.

Source code in chromatic/resampling.py
def edges_to_leftright(edges):
    """
    Convert N+1 contiguous edges to two arrays of N left/right edges.
    """
    left, right = edges[:-1], edges[1:]
    return left, right

leftright_to_edges(left, right) #

Convert two arrays of N left/right edges to N+1 contiguous edges.

Source code in chromatic/resampling.py
def leftright_to_edges(left, right):
    """
    Convert two arrays of N left/right edges to N+1 contiguous edges.
    """
    edges = np.hstack([left, right[-1]])
    return edges
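
These two helpers are inverses of each other; a round trip recovers the original edges:

```python
import numpy as np

# Round-trip sketch for `edges_to_leftright` and `leftright_to_edges`:
# N+1 contiguous edges <-> two arrays of N left/right edges.
edges = np.array([0.0, 1.0, 2.0, 3.0])

left, right = edges[:-1], edges[1:]       # edges_to_leftright
roundtrip = np.hstack([left, right[-1]])  # leftright_to_edges

assert np.array_equal(roundtrip, edges)
```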

plot_as_boxes(x, y, xleft=None, xright=None, **kwargs) #

Plot with boxes, to show the left and right edges of a box. This is useful, for example, to plot the flux associated with pixels, in case you are trying to do a sub-pixel resample, interpolation, or shift.

Parameters#

x : array
    The original independent variable.
y : array
    The original dependent variable (same size as x).
**kwargs : dict
    All additional keywords will be passed to `plt.plot`.

Source code in chromatic/resampling.py
def plot_as_boxes(x, y, xleft=None, xright=None, **kwargs):
    """
    Plot with boxes, to show the left and right edges of a box.
    This is useful, for example, to plot flux associated with
    pixels, in case you are trying to do a sub-pixel resample
    or interpolation or shift.

    Parameters
    ----------
    x : array
        The original independent variable.
    y : array
        The original dependent variable (same size as x).
    **kwargs : dict
        All additional keywords will be passed to plt.plot
    """

    # what are bin edges (making a guess for those on the ends)
    if (xleft is None) and (xright is None):
        xleft, xright = calculate_bin_leftright(x)

    # create a array doubling up the y values and interleaving the edges
    plot_x = np.vstack((xleft, xright)).reshape((-1,), order="F")
    plot_y = np.vstack((y, y)).reshape((-1,), order="F")

    # plot those constructed arrays
    plt.plot(plot_x, plot_y, **kwargs)
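
The interleaving trick at the heart of `plot_as_boxes` can be seen without any plotting: doubling up the y values and alternating the left/right edges produces the step outline of each box.

```python
import numpy as np

# The array construction from `plot_as_boxes`, shown without plotting:
# column-major reshaping interleaves the edges and duplicates each y.
xleft = np.array([0.0, 1.0])
xright = np.array([1.0, 2.0])
y = np.array([5.0, 7.0])

plot_x = np.vstack((xleft, xright)).reshape((-1,), order="F")
plot_y = np.vstack((y, y)).reshape((-1,), order="F")

assert plot_x.tolist() == [0.0, 1.0, 1.0, 2.0]
assert plot_y.tolist() == [5.0, 5.0, 7.0, 7.0]
```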

resample_while_conserving_flux(xin=None, yin=None, xout=None, xin_edges=None, xout_edges=None, replace_nans=0.0, visualize=False, pause=False) #

Starting from some initial x and y, resample onto a different grid (either higher or lower resolution), while conserving total flux.

When including the entire range of xin, sum(yout) == sum(yin) should be true.

When including only part of the range of xin, the integral between any two points should be conserved.

Parameters#

xin : array
    The original independent variable.
yin : array
    The original dependent variable (same size as x).
xout : array
    The new grid of independent variables onto which you want to
    resample the y values. Refers to the center of each bin (use
    `xout_edges` for finer control over the exact edges of the bins).
xin_edges : array
    The edges of the original independent variable bins. The left and
    right edges of the bins are interpreted to be `xin_edges[:-1]` and
    `xin_edges[1:]`, respectively, so the associated `yin` should have
    exactly 1 fewer element than `xin_edges`. This provides finer
    control over the size of each bin in the input than simply
    supplying `xin` (still a little experimental). They should
    probably be sorted.
xout_edges : array
    The edges of the new grid of bins for the independent variable,
    onto which you want to resample the y values. The left and right
    edges of the bins will be, respectively, `xout_edges[:-1]` and
    `xout_edges[1:]`, so the size of the output array will be
    `len(xout_edges) - 1`.
replace_nans : float, str
    Replace NaN values with this value.
    `replace_nans = 0` will add no flux where NaNs are;
    `replace_nans = nan` will ensure you get NaNs returned everywhere
    if you try to resample over any NaN;
    `replace_nans = 'interpolate'` will try to replace NaNs by linearly
    interpolating from nearby values (not yet implemented).
visualize : bool
    Should we make a plot showing whether it worked?
pause : bool
    Should we pause to wait for a key press?

Returns#

result : dict
    A dictionary containing...
        `x` = the center of the output grid
        `y` = the resampled value on the output grid
        `x_edge_lower` = the lower edges of the output grid
        `x_edge_upper` = the upper edges of the output grid

Source code in chromatic/resampling.py
def resample_while_conserving_flux(
    xin=None,
    yin=None,
    xout=None,
    xin_edges=None,
    xout_edges=None,
    replace_nans=0.0,
    visualize=False,
    pause=False,
):
    """
    Starting from some initial x and y, resample onto a
    different grid (either higher or lower resolution),
    while conserving total flux.

    When including the entire range of `xin`,
    `sum(yout) == sum(yin)` should be true.

    When including only part of the range of `xin`,
    the integral between any two points should be conserved.

    Parameters
    ----------
    xin : array
        The original independent variable.
    yin : array
        The original dependent variable (same size as x).
    xout : array
        The new grid of independent variables onto which
        you want to resample the y values. Refers to the
        center of each bin (use `xout_edges` for finer
        control over the exact edges of the bins)
    xin_edges : array
        The edges of the original independent variable bins.
        The left and right edges of the bins are interpreted
        to be `xin_edges[:-1]` and `xin_edges[1:]`,
        respectively, so the associated `yin` should have exactly
        1 fewer element than `xin_edges`. This provides finer
        control over the size of each bin in the input than
        simply supplying `xin`(still a little experimental)
        They should probably be sorted?
    xout_edges : array
        The edges of the new grid of bins for the independent
        variable, onto which you want to resample the y
        values. The left and right edges of the bins will be,
        respectively, `xout_edges[:-1]` and `xout_edges[1:]`,
        so the size of the output array will be
        `len(xout_edges) - 1`
    replace_nans : float, str
        Replace nan values with this value.
        `replace_nans = 0`
            will add no flux where nans are
        `replace_nans = nan`
            will ensure you get nans returned everywhere
            if you try to resample over any nan
        `replace_nans = 'interpolate'`
            will try to replace nans by linearly interpolating
            from nearby values (not yet implemented)
    visualize : bool
        Should we make a plot showing whether it worked?
    pause : bool
        Should we pause to wait for a key press?

    Returns
    -------
    result : dict
        A dictionary containing...
            `x` = the center of the output grid
            `y` = the resampled value on the output grid
            `x_edge_lower` = the lower edges of the output grid
            `x_edge_upper` = the upper edges of the output grid
    """

    # make sure there are some reasonable input options
    assert (xin is not None) or (xin_edges is not None)
    assert yin is not None
    assert (xout is not None) or (xout_edges is not None)

    # set up the bins, to calculate cumulative distribution of y
    if xin_edges is None:
        # make sure the sizes match up
        assert len(xin) == len(yin)
        # sort to make sure x is strictly increasing
        s = np.argsort(xin)
        xin_sorted = xin[s]
        yin_sorted = yin[s]
        # estimate some bin edges (might fail for non-uniform grids)
        xin_left, xin_right = calculate_bin_leftright(xin_sorted)
        # define an array of edges
        xin_edges = leftright_to_edges(xin_left, xin_right)
    else:
        # make sure the sizes match up
        assert len(xin_edges) == (len(yin) + 1)
        # sort to make sure x is strictly increasing
        s = np.argsort(xin_edges)
        xin_left, xin_right = edges_to_leftright(xin_edges[s])
        xin_sorted = (xin_left + xin_right) / 2
        yin_sorted = yin[s[:-1]]

    # the first element should be the left edge of the first pixel
    # last element will be right edge of last pixel
    xin_for_cdf = xin_edges

    # to the left of the first pixel, assume flux is zero
    yin_for_cdf = np.hstack([0, yin_sorted])

    # correct for any non-finite values
    bad = np.isnan(yin_for_cdf)
    if replace_nans == "interpolate":
        raise NotImplementedError(
            "The `replace_nans='interpolate'`` option doens't exist yet!"
        )
    yin_for_cdf[bad] = replace_nans

    # calculate the CDF of the flux (at pixel edge locations)
    cdfin = np.cumsum(yin_for_cdf)

    # create an interpolator for that CDF
    cdfinterpolator = interp1d(
        xin_for_cdf,
        cdfin,
        kind="linear",
        bounds_error=False,
        fill_value=(0.0, np.sum(yin)),
    )

    # calculate bin edges (of size len(xout)+1)
    if xout_edges is None:
        xout_left, xout_right = calculate_bin_leftright(xout)
        xout_edges = leftright_to_edges(xout_left, xout_right)
    else:
        xout_left, xout_right = edges_to_leftright(xout_edges)
        xout = (xout_left + xout_right) / 2

    xout_for_cdf = leftright_to_edges(xout_left, xout_right)

    # interpolate the CDF onto those bin edges
    cdfout = cdfinterpolator(xout_for_cdf)

    # take the derivative of the CDF to get flux per resampled bin
    # (xout is bin center, and yout is the flux in that bin)
    yout = np.diff(cdfout)

    if visualize:
        fi, (ax_cdf, ax_pdf) = plt.subplots(2, 1, sharex=True, dpi=300, figsize=(8, 8))
        inkw = dict(
            color="black",
            alpha=1,
            linewidth=3,
            marker=".",
            markeredgecolor="none",
        )
        outkw = dict(
            color="darkorange",
            alpha=1,
            linewidth=1,
            marker=".",
            markersize=8,
            markeredgecolor="none",
        )

        legkw = dict(
            frameon=False,
            loc="upper left",
        )

        xinbinsize = xin_right - xin_left
        xoutbinsize = xout_right - xout_left
        # plot the PDFs
        plt.sca(ax_pdf)
        plt.ylabel("Flux per (Original) Pixel")
        plt.xlabel("Pixel")
        # plot the original pixels (in df/dpixel to compare with resampled)
        plot_as_boxes(
            xin_sorted, yin_sorted / xinbinsize, label="Original Pixels", **inkw
        )

        # what would a bad interpolation look like?
        interpolate_badly = interp1d(
            xin_sorted,
            yin_sorted / xinbinsize,
            kind="linear",
            bounds_error=False,
            fill_value=0.0,
        )
        plt.plot(
            xout,
            interpolate_badly(xout),
            color="cornflowerblue",
            alpha=1,
            linewidth=1,
            marker=".",
            markersize=8,
            markeredgecolor="none",
            label="Silly Simple Interpolation",
        )

        # plot the flux-conserving resampled data (again, in df/d"pixel")
        plt.plot(
            xout, yout / xoutbinsize, label="Flux-Conserving Interpolation", **outkw
        )

        plt.legend(**legkw)

        # plot the CDFs
        plt.sca(ax_cdf)
        plt.ylabel("Cumulative Flux (from left)")

        # plot the original CDF
        plt.plot(xin_for_cdf, cdfin, label="Original Pixels", **inkw)

        # plot the interpolated CDF
        plt.plot(xout_for_cdf, cdfout, label="Flux-Conserved Resample", **outkw)
        if pause:
            a = input(
                "Pausing a moment to check on interpolation; press return to continue."
            )

        print("{:>6} = {:.5f}".format("Actual", np.sum(yin)))
        print(
            "{:>6} = {:.5f}".format(
                "Silly",
                np.sum(interpolate_badly(xout) * xoutbinsize),
            )
        )
        print("{:>6} = {:.5f}".format("CDF", np.sum(yout)))

    # return the resampled y-values
    return {"x": xout, "x_edge_lower": xout_left, "x_edge_upper": xout_right, "y": yout}

expand_filenames(filepath) #

A wrapper to expand a string or list into a list of filenames.

Source code in chromatic/imports.py
def expand_filenames(filepath):
    """
    A wrapper to expand a string or list into a list of filenames.
    """
    if isinstance(filepath, list):
        filenames = filepath
    elif "*" in filepath:
        filenames = np.sort(glob.glob(filepath))
    else:
        filenames = [filepath]
    return sorted(filenames)
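
A standalone sketch of the expansion logic above, covering the list and single-filename cases (the `*` case needs real files on disk to glob):

```python
import glob
import numpy as np

# The same branching as `expand_filenames`, reproduced for illustration;
# the filenames below are hypothetical examples.
def expand(filepath):
    if isinstance(filepath, list):
        filenames = filepath
    elif "*" in filepath:
        filenames = np.sort(glob.glob(filepath))
    else:
        filenames = [filepath]
    return sorted(filenames)

assert expand("data.fits") == ["data.fits"]
assert expand(["b.fits", "a.fits"]) == ["a.fits", "b.fits"]
```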

name2color(name) #

Return the 3-element RGB array of a given color name.

Parameters#

name : str
    The name of a color.

Returns#

rgb : tuple
    3-element RGB color, with numbers from 0.0 to 1.0.

Source code in chromatic/imports.py
def name2color(name):
    """
    Return the 3-element RGB array of a given color name.

    Parameters
    ----------
    name : str
        The name of a color

    Returns
    -------
    rgb : tuple
        3-element RGB color, with numbers from 0.0 to 1.0
    """

    # give a friendly warning if the color name can't be found
    try:
        color_hex = col.cnames[name]
        return col.hex2color(color_hex)
    except KeyError:
        cheerfully_suggest(f"The color {name} can't be found. (Returning black.)")
        return (0.0, 0.0, 0.0)
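A minimal self-contained sketch of this lookup (the real implementation uses matplotlib's `cnames` table and `hex2color`; the tiny `CNAMES` dictionary and `hex2rgb` helper here are illustrative stand-ins):

```python
# stand-in for matplotlib.colors.cnames, which maps names to hex strings
CNAMES = {"red": "#FF0000", "white": "#FFFFFF"}

def hex2rgb(hex_string):
    # convert "#RRGGBB" into a tuple of floats from 0.0 to 1.0
    h = hex_string.lstrip("#")
    return tuple(int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

def name2color(name):
    # warn and fall back to black if the color name can't be found
    try:
        return hex2rgb(CNAMES[name])
    except KeyError:
        print(f"The color {name} can't be found. (Returning black.)")
        return (0.0, 0.0, 0.0)

print(name2color("red"))      # (1.0, 0.0, 0.0)
print(name2color("rainbow"))  # warns, then returns (0.0, 0.0, 0.0)
```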

one2another(bottom='white', top='red', alpha_bottom=1.0, alpha_top=1.0, N=256) #

Create a cmap that goes smoothly (linearly in RGBA) from "bottom" to "top".

Parameters#

bottom : str
    Name of a color for the bottom of the cmap (0.0)
top : str
    Name of a color for the top of the cmap (1.0)
alpha_bottom : float
    Opacity at the bottom of the cmap
alpha_top : float
    Opacity at the top of the cmap
N : int
    The number of levels in the listed color map

Returns#

cmap : Colormap
    A color map that goes linearly from the bottom to top color (and alpha).

Source code in chromatic/imports.py
def one2another(bottom="white", top="red", alpha_bottom=1.0, alpha_top=1.0, N=256):
    """
    Create a cmap that goes smoothly (linearly in RGBA) from "bottom" to "top".

    Parameters
    ----------
    bottom : str
        Name of a color for the bottom of cmap (0.0)
    top : str
        Name of a color for the top of the cmap (1.0)
    alpha_bottom : float
        Opacity at the bottom of the cmap
    alpha_top : float
        Opacity at the top of the cmap
    N : int
        The number of levels in the listed color map

    Returns
    -------
    cmap : Colormap
        A color map that goes linearly from the
        bottom to top color (and alpha).
    """

    # get the RGB values of the bottom and top of the cmap
    rgb_bottom, rgb_top = name2color(bottom), name2color(top)

    # create linear gradients for all four RGBA channels
    r = np.linspace(rgb_bottom[0], rgb_top[0], N)
    g = np.linspace(rgb_bottom[1], rgb_top[1], N)
    b = np.linspace(rgb_bottom[2], rgb_top[2], N)
    a = np.linspace(alpha_bottom, alpha_top, N)

    # create (N,4) array + populate a listed colormap
    colors = np.transpose(np.vstack([r, g, b, a]))
    cmap = col.ListedColormap(colors, name="{bottom}2{top}".format(**locals()))

    # return the colormap
    return cmap
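The gradient-building step can be sketched without matplotlib: build an `(N, 4)` array whose columns ramp linearly from the bottom RGBA values to the top ones (the RGB tuples below are hard-coded for white and red, where the real function looks them up with `name2color`):

```python
import numpy as np

def linear_rgba_gradient(rgb_bottom, rgb_top, alpha_bottom=1.0, alpha_top=1.0, N=256):
    # ramp each of the four RGBA channels linearly from bottom to top
    channels = [
        np.linspace(lo, hi, N)
        for lo, hi in zip(tuple(rgb_bottom) + (alpha_bottom,),
                          tuple(rgb_top) + (alpha_top,))
    ]
    # stack the channels into an (N, 4) array of colors
    return np.transpose(np.vstack(channels))

colors = linear_rgba_gradient((1.0, 1.0, 1.0), (1.0, 0.0, 0.0), alpha_top=0.5)
print(colors[0])   # white, fully opaque: [1. 1. 1. 1.]
print(colors[-1])  # red, half transparent: [1.  0.  0.  0.5]
```

Passing an array like this to `matplotlib.colors.ListedColormap` gives the finished cmap, as the source above does.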

remove_unit(x) #

Quick wrapper to remove the unit from a quantity, but not complain if it doesn't have one.

Source code in chromatic/imports.py
def remove_unit(x):
    """
    Quick wrapper to remove the unit from a quantity,
    but not complain if it doesn't have one.
    """
    try:
        return x.value
    except AttributeError:
        return x
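In practice `x` is usually an astropy `Quantity`, whose number lives in the `.value` attribute; the `FakeQuantity` class below is a hypothetical stand-in so the sketch runs without astropy:

```python
class FakeQuantity:
    # stand-in for an astropy Quantity, which stores its number in `.value`
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

def remove_unit(x):
    # return the bare number if x carries a unit, otherwise return x unchanged
    try:
        return x.value
    except AttributeError:
        return x

print(remove_unit(FakeQuantity(3.0, "micron")))  # 3.0
print(remove_unit(5))                            # 5
```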