Name | Modified | Size
---|---|---
README.md | 2021-09-06 | 1.3 kB
v0.7.3 source code.tar.gz | 2021-09-06 | 94.2 MB
v0.7.3 source code.zip | 2021-09-06 | 94.3 MB
Totals: 3 items | | 188.5 MB
- Ensemble deep kernel learning (DKL) as an 'approximation' to the fully Bayesian DKL
- Thompson sampler for active learning now comes as a built-in method in the DKL class
- Option to select between correlated and independent outputs for vector-valued functions in DKL
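The distinction between correlated and independent outputs can be illustrated with a toy sketch in plain NumPy (this is a conceptual illustration only, not the AtomAI API): correlated outputs share off-diagonal covariance, while independent outputs use a diagonal covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated outputs: nonzero off-diagonal covariance between output dims
cov_corr = np.array([[1.0, 0.8],
                     [0.8, 1.0]])
# Independent outputs: same marginal variances, zero cross-covariance
cov_ind = np.diag(np.diag(cov_corr))

samples_corr = rng.multivariate_normal(np.zeros(2), cov_corr, size=10_000)
samples_ind = rng.multivariate_normal(np.zeros(2), cov_ind, size=10_000)

# Empirical cross-covariance between the two output dimensions
c_corr = np.cov(samples_corr.T)[0, 1]  # close to 0.8
c_ind = np.cov(samples_ind.T)[0, 1]    # close to 0.0
```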
Example of using an ensemble of DKL models:
:::python
import atomai as aoi

# Initialize and train an ensemble of models
dklgp = aoi.models.dklGPR(indim=X_train.shape[-1], embedim=2)
dklgp.fit_ensemble(X_train, y_train, n_models=5, training_cycles=1500, lr=0.01)
# Make a prediction
y_samples = dklgp.sample_from_posterior(X_test, num_samples=1000) # n_models x n_samples x n_data
y_pred = y_samples.mean(axis=(0,1)) # average over model and sample dimensions
y_var = y_samples.var(axis=(0,1))
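The posterior draws come back stacked along model and sample axes, so reducing over axes `(0, 1)` leaves one statistic per test point. A minimal NumPy sketch with made-up shapes standing in for the real output:

```python
import numpy as np

n_models, n_samples, n_data = 5, 1000, 20
rng = np.random.default_rng(42)
# Stand-in for the (n_models x n_samples x n_data) posterior draws
y_samples = rng.normal(size=(n_models, n_samples, n_data))

y_pred = y_samples.mean(axis=(0, 1))  # one predictive mean per test point
y_var = y_samples.var(axis=(0, 1))    # one predictive variance per test point

# Per-model means; their spread across models reflects cross-model disagreement
per_model_mean = y_samples.mean(axis=1)  # shape: (n_models, n_data)
```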
Example of using a built-in Thompson sampler for active learning:
:::python
import atomai as aoi

for e in range(exploration_steps):
    # Obtain/update the DKL-GP posterior
    dklgp = aoi.models.dklGPR(data_dim, embedim=2, precision="single")
    dklgp.fit(X_train, y_train, training_cycles=50)
    # Thompson sampling to select the next measurement/evaluation point
    obj, next_point = dklgp.thompson(X_cand)
    # Perform a 'measurement'
    y_measured = measure(next_point)
    # Update measured and candidate points, etc...
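The final update step is elided above. One common pattern is to move the measured point from the candidate pool into the training set; the sketch below uses plain NumPy with toy arrays, and assumes (hypothetically) that the chosen candidate is identified by an integer index `next_idx` — names and shapes here are illustrative, not the AtomAI API.

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.uniform(size=(10, 2))   # toy training inputs
y_train = rng.uniform(size=10)        # toy training targets
X_cand = rng.uniform(size=(50, 2))    # toy candidate pool

next_idx = 7       # hypothetical index of the candidate chosen by the sampler
y_measured = 0.5   # hypothetical measurement at that point

# Append the measured point to the training set and drop it from the pool
X_train = np.concatenate([X_train, X_cand[next_idx:next_idx + 1]], axis=0)
y_train = np.append(y_train, y_measured)
X_cand = np.delete(X_cand, next_idx, axis=0)
```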