
Commit 21ffe1f

fix @example usage in docs
1 parent bc3b51f commit 21ffe1f

2 files changed: +10 −2 lines

Project.toml

Lines changed: 1 addition & 1 deletion

````diff
@@ -1,7 +1,7 @@
 name = "Surrogates"
 uuid = "6fc51010-71bc-11e9-0e15-a3fcc6593c49"
 authors = ["SciML"]
-version = "6.1.0"
+version = "6.1.1"
 
 [deps]
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
````

docs/src/tutorials.md

Lines changed: 9 additions & 1 deletion

````diff
@@ -2,6 +2,7 @@
 Let's start with something easy to get our hands dirty.
 I want to build a surrogate for ``f(x) = \log(x) \cdot x^2+x^3``.
 Let's choose the radial basis surrogate.
+
 ```@example
 using Surrogates
 f = x -> log(x)*x^2+x^3
@@ -14,7 +15,9 @@ my_radial_basis = RadialBasis(x,y,lb,ub)
 #I want an approximation at 5.4
 approx = my_radial_basis(5.4)
 ```
+
 Let's now see an example in 2D.
+
 ```@example
 using Surrogates
 using LinearAlgebra
@@ -34,6 +37,7 @@ Let's now use the Kriging surrogate, which is a single-output Gaussian process.
 This surrogate has a nice feature: not only does it approximate the solution at a
 point, it also calculates the standard error at such point.
 Let's see an example:
+
 ```@example kriging
 using Surrogates
 f = x -> exp(x)*x^2+x^3
@@ -52,16 +56,19 @@ std_err = std_error_at_point(my_krig,5.4)
 ```
 
 Let's now optimize the Kriging surrogate using Lower confidence bound method, this is just a one-liner:
+
 ```@example kriging
 surrogate_optimize(f,LCBS(),lb,ub,my_krig,UniformSample())
 ```
+
 Surrogate optimization methods have two purposes: they both sample the space in unknown regions and look for the minima at the same time.
 
 ## Lobachevsky integral
 The Lobachevsky surrogate has the nice feature of having a closed formula for its
 integral, which is something that other surrogates are missing.
 Let's compare it with QuadGK:
-```@examples
+
+```@example
 using Surrogates
 using QuadGK
 obj = x -> 3*x + log(x)
@@ -83,6 +90,7 @@ int_val_true = int[1]-int[2]
 
 ## Example of NeuralSurrogate
 Basic example of fitting a neural network on a simple function of two variables.
+
 ```@example
 using Surrogates
 using Flux
````
