
Commit c23f174

Merge pull request #459 from ArnoStrouwen/docs

LanguageTool

2 parents: 25d95f1 + e1fa015

31 files changed (+94, -94 lines)

Project.toml

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ Aqua = "0.8"
 Cubature = "1.5"
 Distributions = "0.25.71"
 ExtendableSparse = "1"
-Flux = "0.14"
+Flux = "0.13, 0.14"
 ForwardDiff = "0.10.19"
 GLM = "1.5"
 IterativeSolvers = "0.9"
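
The widened compat entry lets the resolver pick either major-zero Flux series, so for instance both of the following now resolve (a sketch using Pkg's standard API; not part of the commit):

```julia
using Pkg

Pkg.add(name = "Flux", version = "0.13")   # allowed again by this change
Pkg.add(name = "Flux", version = "0.14")   # still allowed
```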

docs/src/BraninFunction.md

Lines changed: 4 additions & 4 deletions

@@ -1,13 +1,13 @@
 # Branin Function
 
-The Branin Function is commonly used as a test function for metamodelling in computer experiments, especially in the context of optimization.
+The Branin function is commonly used as a test function for metamodelling in computer experiments, especially in the context of optimization.
 
 The expression of the Branin Function is given as:
 ``f(x) = (x_2 - \frac{5.1}{4\pi^2}x_1^{2} + \frac{5}{\pi}x_1 - 6)^2 + 10(1-\frac{1}{8\pi})\cos(x_1) + 10``
 
 where ``x = (x_1, x_2)`` with ``-5\leq x_1 \leq 10, 0 \leq x_2 \leq 15``
 
-First of all we will import these two packages `Surrogates` and `Plots`.
+First of all, we will import these two packages: `Surrogates` and `Plots`.
 
 ```@example BraninFunction
 using Surrogates
@@ -50,7 +50,7 @@ scatter!(xs, ys)
 plot(p1, p2, title="True function")
 ```
 
-Now it's time to try fitting different surrogates and then we will plot them.
+Now it's time to try fitting different surrogates, and then we will plot them.
 We will have a look at the radial basis surrogate `Radial Basis Surrogate`. :
 
 ```@example BraninFunction
@@ -65,7 +65,7 @@ scatter!(xs, ys, marker_z=zs)
 plot(p1, p2, title="Radial Surrogate")
 ```
 
-Now, we will have a look on `Inverse Distance Surrogate`:
+Now, we will have a look at `Inverse Distance Surrogate`:
 ```@example BraninFunction
 InverseDistance = InverseDistanceSurrogate(xys, zs, lower_bound, upper_bound)
 ```
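
For context, the tutorial these hunks edit builds roughly the following workflow. This is a minimal self-contained sketch, assuming the Surrogates.jl API visible in the hunks; the `branin` helper restates the formula quoted above and is not itself part of the commit:

```julia
using Surrogates

# Branin function, as given in the tutorial text.
function branin(x)
    x1, x2 = x
    (x2 - (5.1 / (4 * pi^2)) * x1^2 + (5 / pi) * x1 - 6)^2 +
        10 * (1 - 1 / (8 * pi)) * cos(x1) + 10
end

lower_bound = [-5.0, 0.0]    # -5 <= x1 <= 10
upper_bound = [10.0, 15.0]   #  0 <= x2 <= 15

xys = sample(50, lower_bound, upper_bound, SobolSample())
zs = branin.(xys)

# The two surrogates the tutorial compares:
radial = RadialBasis(xys, zs, lower_bound, upper_bound)
InverseDistance = InverseDistanceSurrogate(xys, zs, lower_bound, upper_bound)
radial((2.5, 7.5))   # evaluate a fitted surrogate at a new point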

docs/src/InverseDistance.md

Lines changed: 5 additions & 5 deletions

@@ -1,4 +1,4 @@
-The **Inverse Distance Surrogate** is an interpolating method and in this method the unknown points are calculated with a weighted average of the sampling points. This model uses the inverse distance between the unknown and training points to predict the unknown point. We do not need to fit this model because the response of an unknown point x is computed with respect to the distance between x and the training points.
+The **Inverse Distance Surrogate** is an interpolating method, and in this method, the unknown points are calculated with a weighted average of the sampling points. This model uses the inverse distance between the unknown and training points to predict the unknown point. We do not need to fit this model because the response of an unknown point x is computed with respect to the distance between x and the training points.
 
 Let's optimize the following function to use Inverse Distance Surrogate:
 
@@ -53,7 +53,7 @@ plot!(InverseDistance, label="Surrogate function", xlims=(lower_bound, upper_bo
 
 Having built a surrogate, we can now use it to search for minima in our original function `f`.
 
-To optimize using our surrogate we call `surrogate_optimize` method. We choose to use Stochastic RBF as optimization technique and again Sobol sampling as sampling technique.
+To optimize using our surrogate we call `surrogate_optimize` method. We choose to use Stochastic RBF as the optimization technique and again Sobol sampling as the sampling technique.
 
 ```@example Inverse_Distance1D
 @show surrogate_optimize(f, SRBF(), lower_bound, upper_bound, InverseDistance, SobolSample())
@@ -65,7 +65,7 @@ plot!(InverseDistance, label="Surrogate function", xlims=(lower_bound, upper_bo
 
 ## Inverse Distance Surrogate Tutorial (ND):
 
-First of all we will define the `Schaffer` function we are going to build surrogate for. Notice, one how its argument is a vector of numbers, one for each coordinate, and its output is a scalar.
+First of all we will define the `Schaffer` function we are going to build a surrogate for. Notice, how its argument is a vector of numbers, one for each coordinate, and its output is a scalar.
 
 ```@example Inverse_DistanceND
 using Plots # hide
@@ -84,7 +84,7 @@ end
 
 ### Sampling
 
-Let's define our bounds, this time we are working in two dimensions. In particular we want our first dimension `x` to have bounds `-5, 10`, and `0, 15` for the second dimension. We are taking 60 samples of the space using Sobol Sequences. We then evaluate our function on all of the sampling points.
+Let's define our bounds, this time we are working in two dimensions. In particular we want our first dimension `x` to have bounds `-5, 10`, and `0, 15` for the second dimension. We are taking 60 samples of the space using Sobol Sequences. We then evaluate our function on all the sampling points.
 
 ```@example Inverse_DistanceND
 n_samples = 60
@@ -124,7 +124,7 @@ plot(p1, p2, title="Surrogate") # hide
 
 
 ### Optimizing
-With our surrogate we can now search for the minima of the function.
+With our surrogate, we can now search for the minima of the function.
 
 Notice how the new sampled points, which were created during the optimization process, are appended to the `xys` array.
 This is why its size changes.
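
The "weighted average of the sampling points" in the first hunk has a compact closed form. A stand-alone 1D illustration of the idea (a toy version for illustration, not the package implementation):

```julia
# Inverse-distance weighting: the prediction is a weighted average of
# training values, with weights 1 / distance^p.
function idw_predict(x, xs, ys; p = 2)
    for (xi, yi) in zip(xs, ys)
        xi == x && return yi    # interpolating: exact at training points
    end
    w = [1 / abs(x - xi)^p for xi in xs]
    sum(w .* ys) / sum(w)       # weighted average, as described above
end

xs = [0.0, 1.0, 2.0]
ys = sin.(xs)
idw_predict(0.5, xs, ys)        # note: no fitting step was needed
```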

docs/src/LinearSurrogate.md

Lines changed: 5 additions & 5 deletions

@@ -28,7 +28,7 @@ plot!(f, label="True function", xlims=(lower_bound, upper_bound))
 
 ## Building a Surrogate
 
-With our sampled points we can build the **Linear Surrogate** using the `LinearSurrogate` function.
+With our sampled points, we can build the **Linear Surrogate** using the `LinearSurrogate` function.
 
 We can simply calculate `linear_surrogate` for any value.
 
@@ -51,7 +51,7 @@ plot!(my_linear_surr_1D, label="Surrogate function", xlims=(lower_bound, upper_
 
 Having built a surrogate, we can now use it to search for minima in our original function `f`.
 
-To optimize using our surrogate we call `surrogate_optimize` method. We choose to use Stochastic RBF as optimization technique and again Sobol sampling as sampling technique.
+To optimize using our surrogate we call `surrogate_optimize` method. We choose to use Stochastic RBF as the optimization technique and again Sobol sampling as the sampling technique.
 
 ```@example linear_surrogate1D
 @show surrogate_optimize(f, SRBF(), lower_bound, upper_bound, my_linear_surr_1D, SobolSample())
@@ -63,7 +63,7 @@ plot!(my_linear_surr_1D, label="Surrogate function", xlims=(lower_bound, upper_
 
 ## Linear Surrogate tutorial (ND)
 
-First of all we will define the `Egg Holder` function we are going to build surrogate for. Notice, one how its argument is a vector of numbers, one for each coordinate, and its output is a scalar.
+First of all we will define the `Egg Holder` function we are going to build a surrogate for. Notice, one how its argument is a vector of numbers, one for each coordinate, and its output is a scalar.
 
 ```@example linear_surrogateND
 using Plots # hide
@@ -104,7 +104,7 @@ plot(p1, p2, title="True function") # hide
 ```
 
 ### Building a surrogate
-Using the sampled points we build the surrogate, the steps are analogous to the 1-dimensional case.
+Using the sampled points, we build the surrogate, the steps are analogous to the 1-dimensional case.
 
 ```@example linear_surrogateND
 my_linear_ND = LinearSurrogate(xys, zs, lower_bound, upper_bound)
@@ -119,7 +119,7 @@ plot(p1, p2, title="Surrogate") # hide
 ```
 
 ### Optimizing
-With our surrogate we can now search for the minima of the function.
+With our surrogate, we can now search for the minima of the function.
 
 Notice how the new sampled points, which were created during the optimization process, are appended to the `xys` array.
 This is why its size changes.
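
For context, the 1D workflow these hunks describe boils down to a few calls. A minimal sketch assuming the API names shown in the diff; `f` and the bounds are illustrative, not from the tutorial:

```julia
using Surrogates

f = x -> (x - 3)^2 + 2
lower_bound, upper_bound = 0.0, 10.0

x = sample(20, lower_bound, upper_bound, SobolSample())
y = f.(x)
my_linear_surr_1D = LinearSurrogate(x, y, lower_bound, upper_bound)

# SRBF() is the Stochastic RBF optimization technique; Sobol sampling is
# reused as the sampling technique, as the prose says.
surrogate_optimize(f, SRBF(), lower_bound, upper_bound,
                   my_linear_surr_1D, SobolSample())
```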

docs/src/Salustowicz.md

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ scatter(x, y, label="Sampled points", xlims=(lower_bound, upper_bound), legend=:
 plot!(xs, salustowicz.(xs), label="True function", legend=:top)
 ```
 
-Now, let's fit Salustowicz Function with different Surrogates:
+Now, let's fit the Salustowicz function with different surrogates:
 
 ```@example salustowicz1D
 InverseDistance = InverseDistanceSurrogate(x, y, lower_bound, upper_bound)
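
The benchmark in question, as usually defined in the literature (worth checking against the tutorial's own `salustowicz` definition):

```julia
# 1D Salustowicz benchmark: oscillates with rapidly decaying amplitude.
salustowicz(x) = exp(-x) * x^3 * cos(x) * sin(x) * (cos(x) * sin(x)^2 - 1)

lower_bound, upper_bound = 0.0, 10.0
xs = lower_bound:0.001:upper_bound
extrema(salustowicz.(xs))   # range of the function over the domain
```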

docs/src/abstractgps.md

Lines changed: 2 additions & 2 deletions

@@ -1,12 +1,12 @@
 # Gaussian Process Surrogate Tutorial
 
 !!! note
-    This surrogate requires the 'SurrogatesAbstractGPs' module which can be added by inputting "]add SurrogatesAbstractGPs" from the Julia command line.
+    This surrogate requires the 'SurrogatesAbstractGPs' module, which can be added by inputting "]add SurrogatesAbstractGPs" from the Julia command line.
 
 Gaussian Process regression in Surrogates.jl is implemented as a simple wrapper around the [AbstractGPs.jl](https://github.com/JuliaGaussianProcesses/AbstractGPs.jl) package. AbstractGPs comes with a variety of covariance functions (kernels). See [KernelFunctions.jl](https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/) for examples.
 
 !!! tip
-    The examples below demonstrate the use of AbstractGPs with out-of-the-box settings without hyperparameter optimization (i.e. without changing parameters like lengthscale, signal variance and noise variance.) Beyond hyperparameter optimization, careful initialization of hyperparameters and priors on the parameters is required for this surrogate to work properly. For more details on how to fit GPs in practice, check out [A Practical Guide to Gaussian Processes](https://infallible-thompson-49de36.netlify.app/).
+    The examples below demonstrate the use of AbstractGPs with out-of-the-box settings without hyperparameter optimization (i.e. without changing parameters like lengthscale, signal variance, and noise variance). Beyond hyperparameter optimization, careful initialization of hyperparameters and priors on the parameters is required for this surrogate to work properly. For more details on how to fit GPs in practice, check out [A Practical Guide to Gaussian Processes](https://infallible-thompson-49de36.netlify.app/).
 
 Also see this [example](https://juliagaussianprocesses.github.io/AbstractGPs.jl/stable/examples/1-mauna-loa/#Hyperparameter-Optimization) to understand hyperparameter optimization with AbstractGPs.
 ## 1D Example
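
An out-of-the-box fit of the kind the tip warns about, sketched with the package names from the note; the function, bounds, and sample count are illustrative, and the constructor defaults are assumed from the Surrogates.jl docs:

```julia
using Surrogates, SurrogatesAbstractGPs

f = x -> sin(x) + 0.1 * x^2
lb, ub = 0.0, 10.0
x = sample(20, lb, ub, SobolSample())
y = f.(x)

gp_surrogate = AbstractGPSurrogate(x, y)   # default GP prior and noise
gp_surrogate(2.0)                          # posterior mean at a new point
std_error_at_point(gp_surrogate, 2.0)      # posterior uncertainty there
```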

docs/src/ackley.md

Lines changed: 1 addition & 1 deletion

@@ -64,4 +64,4 @@ plot!(xs, ackley.(xs), label="True function", legend=:top)
 plot!(xs, my_rad.(xs), label="Radial basis optimized", legend=:top)
 ```
 
-The DYCORS methods successfully finds the minimum.
+The DYCORS method successfully finds the minimum.
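
The conclusion refers to a call along these lines. A self-contained sketch with a standard 1D Ackley; the tutorial defines its own copy and bounds, so the details below are a restatement:

```julia
using Surrogates

# Standard 1D Ackley, minimized at x = 0.
function ackley(x)
    a, b, c = 20.0, 0.2, 2π
    -a * exp(-b * sqrt(x^2)) - exp(cos(c * x)) + a + exp(1)
end

lower_bound, upper_bound = -32.768, 32.768
x = sample(60, lower_bound, upper_bound, SobolSample())
my_rad = RadialBasis(x, ackley.(x), lower_bound, upper_bound)

# DYCORS iteratively perturbs the best point found so far, so the
# optimized surrogate concentrates samples near the minimum.
surrogate_optimize(ackley, DYCORS(), lower_bound, upper_bound,
                   my_rad, SobolSample())
```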

docs/src/cantilever.md

Lines changed: 1 addition & 1 deletion

@@ -42,7 +42,7 @@ plot(p1, p2, title="True function")
 ```
 
 
-Fitting different Surrogates:
+Fitting different surrogates:
 ```@example beam
 mypoly = PolynomialChaosSurrogate(xys, zs, lb, ub)
 loba = LobachevskySurrogate(xys, zs, lb, ub)

docs/src/gek.md

Lines changed: 6 additions & 6 deletions

@@ -1,6 +1,6 @@
 ## Gradient Enhanced Kriging
 
-Gradient-enhanced Kriging is an extension of kriging which supports gradient information. GEK is usually more accurate than kriging, however, it is not computationally efficient when the number of inputs, the number of sampling points, or both, are high. This is mainly due to the size of the corresponding correlation matrix that increases proportionally with both the number of inputs and the number of sampling points.
+Gradient-enhanced Kriging is an extension of kriging which supports gradient information. GEK is usually more accurate than kriging. However, it is not computationally efficient when the number of inputs, the number of sampling points, or both, are high. This is mainly due to the size of the corresponding correlation matrix, which increases proportionally with both the number of inputs and the number of sampling points.
 
 Let's have a look at the following function to use Gradient Enhanced Surrogate:
 ``f(x) = sin(x) + 2*x^2``
@@ -15,7 +15,7 @@ default()
 
 ### Sampling
 
-We choose to sample f in 8 points between 0 to 1 using the `sample` function. The sampling points are chosen using a Sobol sequence, this can be done by passing `SobolSample()` to the `sample` function.
+We choose to sample f in 8 points between 0 and 1 using the `sample` function. The sampling points are chosen using a Sobol sequence, this can be done by passing `SobolSample()` to the `sample` function.
 
 ```@example GEK1D
 n_samples = 10
@@ -34,7 +34,7 @@ plot!(f, label="True function", xlims=(lower_bound, upper_bound), legend=:top)
 
 ### Building a surrogate
 
-With our sampled points we can build the Gradient Enhanced Kriging surrogate using the `GEK` function.
+With our sampled points, we can build the Gradient Enhanced Kriging surrogate using the `GEK` function.
 
 ```@example GEK1D
@@ -47,7 +47,7 @@ plot!(my_gek, label="Surrogate function", ribbon=p->std_error_at_point(my_gek, p
 
 ## Gradient Enhanced Kriging Surrogate Tutorial (ND)
 
-First of all let's define the function we are going to build a surrogate for.
+First of all, let's define the function we are going to build a surrogate for.
 
 ```@example GEK_ND
 using Plots # hide
@@ -69,7 +69,7 @@ end
 
 ### Sampling
 
-Let's define our bounds, this time we are working in two dimensions. In particular we want our first dimension `x` to have bounds `0, 10`, and `0, 10` for the second dimension. We are taking 80 samples of the space using Sobol Sequences. We then evaluate our function on all of the sampling points.
+Let's define our bounds, this time we are working in two dimensions. In particular, we want our first dimension `x` to have bounds `0, 10`, and `0, 10` for the second dimension. We are taking 80 samples of the space using Sobol Sequences. We then evaluate our function on all the sampling points.
 
 ```@example GEK_ND
 n_samples = 45
@@ -91,7 +91,7 @@ plot(p1, p2, title="True function") # hide
 ```
 
 ### Building a surrogate
-Using the sampled points we build the surrogate, the steps are analogous to the 1-dimensional case.
+Using the sampled points, we build the surrogate, the steps are analogous to the 1-dimensional case.
 
 ```@example GEK_ND
 grad1 = x1 -> 2*(300*(x[1])^5 - 300*(x[1])^2*x[2] + x[1] -1)
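
The 1D tutorial these hunks edit pairs function values with derivative information. A sketch of that setup, assuming `GEK` consumes the function values concatenated with the derivatives (one common GEK convention; check the full gek.md for the exact layout):

```julia
using Surrogates

# f(x) = sin(x) + 2x^2 on [0, 1], sampled with a Sobol sequence,
# as in the first two hunks above.
f = x -> sin(x) + 2 * x^2
df = x -> cos(x) + 4 * x          # analytic derivative of f

lower_bound, upper_bound = 0.0, 1.0
x = sample(8, lower_bound, upper_bound, SobolSample())
y = vcat(f.(x), df.(x))           # values first, then gradients (assumed)

my_gek = GEK(x, y, lower_bound, upper_bound)
my_gek(0.5)                       # surrogate prediction at a new point
```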

docs/src/gekpls.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 ## GEKPLS Surrogate Tutorial
 
-Gradient Enhanced Kriging with Partial Least Squares Method (GEKPLS) is a surrogate modelling technique that brings down computation time and returns improved accuracy for high-dimensional problems. The Julia implementation of GEKPLS is adapted from the Python version by [SMT](https://github.com/SMTorg) which is based on this [paper](https://arxiv.org/pdf/1708.02663.pdf).
+Gradient Enhanced Kriging with Partial Least Squares Method (GEKPLS) is a surrogate modeling technique that brings down computation time and returns improved accuracy for high-dimensional problems. The Julia implementation of GEKPLS is adapted from the Python version by [SMT](https://github.com/SMTorg) which is based on this [paper](https://arxiv.org/pdf/1708.02663.pdf).
 
 The following are the inputs when building a GEKPLS surrogate:
 
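
A hedged sketch of the constructor that input list describes, following the GEKPLS signature in the Surrogates.jl docs; the objective, bounds, and settings below are illustrative, not part of this commit:

```julia
using Surrogates, Zygote

sphere = x -> sum(abs2, x)
lb = [-5.0, -5.0, -5.0]
ub = [5.0, 5.0, 5.0]

x = sample(100, lb, ub, SobolSample())
y = sphere.(x)
grads = Zygote.gradient.(sphere, x)   # gradient at each sample point

n_comp = 2                            # number of PLS components
delta_x = 0.0001                      # step size for the PLS projections
extra_points = 2                      # extra gradient-derived points
initial_theta = [0.01 for _ in 1:n_comp]

g = GEKPLS(x, y, grads, n_comp, delta_x, lb, ub, extra_points, initial_theta)
g((1.0, 1.0, 1.0))                    # prediction at a new point
```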