This tutorial was generated from an IPython notebook that can be downloaded here.

theano version: 1.0.4
pymc3 version: 3.6
exoplanet version: 0.1.5

Gaussian process models for stellar variability

When fitting exoplanets, we also need to fit for the stellar variability, and Gaussian Processes (GPs) are often a good descriptive model for this variation. PyMC3 has support for all sorts of general GP models, but exoplanet includes support for scalable 1D GPs (see gp for more info) that can work with large datasets. In this tutorial, we go through the process of modeling the light curve of a rotating star observed by Kepler using exoplanet.

First, let’s download and plot the data:

import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits

url = ""  # URL of the Kepler light curve FITS file (missing from this copy)
with fits.open(url) as hdus:
    data = hdus[1].data

x = data["TIME"]
y = data["PDCSAP_FLUX"]
yerr = data["PDCSAP_FLUX_ERR"]
m = (data["SAP_QUALITY"] == 0) & np.isfinite(x) & np.isfinite(y)

x = np.ascontiguousarray(x[m], dtype=np.float64)
y = np.ascontiguousarray(y[m], dtype=np.float64)
yerr = np.ascontiguousarray(yerr[m], dtype=np.float64)
mu = np.mean(y)
y = (y / mu - 1) * 1e3
yerr = yerr * 1e3 / mu
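The last two lines convert the flux to parts per thousand (ppt) relative to the mean. A quick sanity check with made-up numbers (the array below is illustrative, not the Kepler data):

```python
import numpy as np

y_raw = np.array([1000.0, 1001.0, 999.0])  # hypothetical raw flux values
mu = y_raw.mean()
y_ppt = (y_raw / mu - 1) * 1e3  # relative flux in parts per thousand
print(y_ppt)  # → [ 0.  1. -1.]
```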

plt.plot(x, y, "k")
plt.xlim(x.min(), x.max())
plt.xlabel("time [days]")
plt.ylabel("relative flux [ppt]")
plt.title("KIC 5809890");

A Gaussian process model for stellar variability

This looks like the light curve of a rotating star, and it has been shown that it is possible to model this variability by using a quasiperiodic Gaussian process. To start with, let’s get an estimate of the rotation period using the Lomb-Scargle periodogram:

import exoplanet as xo

results = xo.estimators.lomb_scargle_estimator(
    x, y, max_peaks=1, min_period=5.0, max_period=100.0
)

peak = results["peaks"][0]
freq, power = results["periodogram"]
plt.plot(-np.log10(freq), power, "k")
plt.axvline(np.log10(peak["period"]), color="k", lw=4, alpha=0.3)
plt.xlim((-np.log10(freq)).min(), (-np.log10(freq)).max())
plt.xlabel("log10(period)")
plt.ylabel("power");
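For intuition, the idea behind a periodogram-based period estimate can be sketched with plain NumPy on an evenly sampled synthetic signal; the Lomb-Scargle estimator above does the same job for irregular sampling. Everything in this snippet (signal, sampling, noise level) is illustrative:

```python
import numpy as np

# Evenly sampled synthetic "light curve" with a 25-day periodicity (illustrative)
dt = 0.5  # days
t = np.arange(0, 400, dt)
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * t / 25.0) + 0.3 * rng.standard_normal(t.size)

# The power spectrum peaks at the dominant frequency; its inverse is the period
freq = np.fft.rfftfreq(t.size, d=dt)
power = np.abs(np.fft.rfft(y - y.mean())) ** 2
best_period = 1.0 / freq[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
print(round(best_period, 3))  # → 25.0
```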

Now, using this initialization, we can set up the GP model in exoplanet. We’ll use the RotationTerm kernel, a mixture of two simple harmonic oscillators with periods separated by a factor of two. As you can see from the periodogram above, this might be a good model for this light curve, and I’ve found that it works well in many cases.

import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:

    # The mean flux of the time series
    mean = pm.Normal("mean", mu=0.0, sd=10.0)

    # A jitter term describing excess white noise
    logs2 = pm.Normal("logs2", mu=2*np.log(np.min(yerr)), sd=5.0)

    # The parameters of the RotationTerm kernel
    logamp = pm.Normal("logamp", mu=np.log(np.var(y)), sd=5.0)
    logperiod = pm.Normal("logperiod", mu=np.log(peak["period"]), sd=5.0)
    logQ0 = pm.Normal("logQ0", mu=1.0, sd=10.0)
    logdeltaQ = pm.Normal("logdeltaQ", mu=2.0, sd=10.0)
    mix = pm.Uniform("mix", lower=0, upper=1.0)

    # Track the period as a deterministic
    period = pm.Deterministic("period", tt.exp(logperiod))

    # Set up the Gaussian Process model
    kernel = xo.gp.terms.RotationTerm(
        log_amp=logamp,
        period=period,
        log_Q0=logQ0,
        log_deltaQ=logdeltaQ,
        mix=mix
    )
    gp = xo.gp.GP(kernel, x, yerr**2 + tt.exp(logs2), J=4)

    # Compute the Gaussian Process likelihood and add it into the
    # the PyMC3 model as a "potential"
    pm.Potential("loglike", gp.log_likelihood(y - mean))

    # Compute the mean model prediction for plotting purposes
    pm.Deterministic("pred", gp.predict())

    # Optimize to find the maximum a posteriori parameters
    map_soln = xo.optimize(start=model.test_point)
optimizing logp for variables: ['mix_interval__', 'logdeltaQ', 'logQ0', 'logperiod', 'logamp', 'logs2', 'mean']
message: Optimization terminated successfully.
logp: 515.8061433750984 -> 692.7159093512395
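For intuition about what the RotationTerm encodes, here is a toy quasiperiodic covariance function with the same flavor: a mixture of two damped cosines with periods P and P/2. The names and exact functional form below are an illustrative sketch, not the exoplanet implementation:

```python
import numpy as np

def toy_rotation_kernel(tau, amp=1.0, period=28.0, q=5.0, mix=0.5):
    """Toy quasiperiodic kernel: damped cosines at P and P/2 (illustrative)."""
    decay = np.exp(-np.abs(tau) / (q * period))   # coherence envelope
    primary = np.cos(2 * np.pi * tau / period)    # oscillation at P
    secondary = np.cos(4 * np.pi * tau / period)  # oscillation at P/2
    return amp * decay * ((1 - mix) * primary + mix * secondary)

tau = np.array([0.0, 14.0, 28.0, 56.0])
print(toy_rotation_kernel(tau))
```

At lag zero the covariance equals the amplitude, at half a period the two components cancel (for mix=0.5), and at integer multiples of the period the covariance decays slowly, which is the quasiperiodicity we see in the light curve.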

Now that we have the model set up, let’s plot the maximum a posteriori model prediction.

plt.plot(x, y, "k", label="data")
plt.plot(x, map_soln["pred"], color="C1", label="model")
plt.xlim(x.min(), x.max())
plt.xlabel("time [days]")
plt.ylabel("relative flux [ppt]")
plt.title("KIC 5809890; map model");

That looks pretty good! Now let’s sample from the posterior using an exoplanet.PyMC3Sampler.

sampler = xo.PyMC3Sampler(finish=200)
with model:
    sampler.tune(tune=2000, start=map_soln, step_kwargs=dict(target_accept=0.9))
    trace = sampler.sample(draws=2000)
Sampling 4 chains: 100%|██████████| 308/308 [00:26<00:00,  3.07draws/s]
Sampling 4 chains: 100%|██████████| 108/108 [00:06<00:00, 17.79draws/s]
Sampling 4 chains: 100%|██████████| 208/208 [00:03<00:00, 61.30draws/s]
Sampling 4 chains: 100%|██████████| 408/408 [00:06<00:00, 61.59draws/s]
Sampling 4 chains: 100%|██████████| 808/808 [00:11<00:00, 72.43draws/s]
Sampling 4 chains: 100%|██████████| 1608/1608 [00:24<00:00, 65.44draws/s]
Sampling 4 chains: 100%|██████████| 4608/4608 [01:00<00:00, 24.49draws/s]
Sampling 4 chains: 100%|██████████| 808/808 [00:13<00:00, 62.03draws/s]
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mix, logdeltaQ, logQ0, logperiod, logamp, logs2, mean]
Sampling 4 chains: 100%|██████████| 8000/8000 [01:32<00:00, 20.91draws/s]
There was 1 divergence after tuning. Increase target_accept or reparameterize.
There were 3 divergences after tuning. Increase target_accept or reparameterize.
There were 3 divergences after tuning. Increase target_accept or reparameterize.
There were 2 divergences after tuning. Increase target_accept or reparameterize.
The number of effective samples is smaller than 25% for some parameters.

Now we can do the usual convergence checks:

pm.summary(trace, varnames=["mix", "logdeltaQ", "logQ0", "logperiod", "logamp", "logs2", "mean"])
               mean        sd  mc_error    hpd_2.5   hpd_97.5        n_eff      Rhat
mix        0.637563  0.245938  0.005724   0.187067   0.999221  1822.199839  1.000120
logdeltaQ  1.936896  9.648307  0.203627 -17.216046  21.440275  2541.504784  1.000209
logQ0      0.557875  0.533729  0.008589  -0.450243   1.639260  3028.111015  0.999833
logperiod  3.344167  0.119175  0.003061   3.146420   3.615142  1310.401389  1.000515
logamp     0.406268  0.580945  0.011413  -0.633788   1.550076  2232.659871  1.000023
logs2     -4.968278  0.124893  0.001894  -5.208997  -4.721735  5170.426841  1.000211
mean      -0.022485  0.218446  0.003430  -0.448668   0.430058  4340.080494  1.000882
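The Rhat column compares between-chain and within-chain variance; values near 1 indicate the chains agree. A minimal Gelman-Rubin sketch (simplified, without the rank normalization and chain splitting that modern implementations use) for an (n_chains, n_draws) array:

```python
import numpy as np

def rhat(chains):
    """Simplified Gelman-Rubin statistic for an (n_chains, n_draws) array."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    within = chains.var(axis=1, ddof=1).mean()  # W: average within-chain variance
    between = n * chain_means.var(ddof=1)       # B: scaled between-chain variance
    var_est = (n - 1) / n * within + between / n
    return np.sqrt(var_est / within)

rng = np.random.default_rng(1)
good = rng.standard_normal((4, 2000))   # well-mixed chains
bad = good + np.arange(4)[:, None]      # chains stuck at different levels
print(rhat(good), rhat(bad))
```

The well-mixed chains give Rhat near 1, while the offset chains give a value well above 1, which is the signal that sampling has not converged.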

And plot the posterior distribution over rotation period:

period_samples = trace["period"]
bins = np.linspace(20, 45, 40)
plt.hist(period_samples, bins, histtype="step", color="k")
plt.xlim(bins.min(), bins.max())
plt.xlabel("rotation period [days]")
plt.ylabel("posterior density");
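To quote a single number with uncertainties from a posterior like this, the 16th/50th/84th percentiles are a common summary. The samples below are a synthetic stand-in for trace["period"] (illustrative only; the real values come from the trace above):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for trace["period"]: draws centered near ~28 days
period_samples = rng.normal(28.4, 3.4, size=8000)

q16, q50, q84 = np.percentile(period_samples, [16, 50, 84])
print(f"P_rot = {q50:.1f} +{q84 - q50:.1f} -{q50 - q16:.1f} days")
```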