Commit e8f191b

Commit message: slides

Parent: bdfc73f
File tree

6 files changed (+3, -3 lines changed)
Binary files (contents not shown):

(unnamed binary file): -7 Bytes
docs/lectures/autoencoders.pdf: -14 Bytes
(unnamed binary file): -3 Bytes
(unnamed binary file): -7 Bytes
(unnamed binary file): -14 Bytes

lecture-source/16-autoencoders/autoencoders.tex

Lines changed: 3 additions & 3 deletions
@@ -266,7 +266,7 @@
 Consider the single-hidden layer linear autoencoder network given by:
 
 \begin{align*}
-h &= \bm{\mathrm{W}}_e \bm{x} + \bm{b}_e \\
+z &= \bm{\mathrm{W}}_e \bm{x} + \bm{b}_e \\
 r &= \bm{\mathrm{W}}_d \bm{z} + \bm{b}_d
 \end{align*}
 
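For context, this hunk renames the encoder output from h to z so it matches the \bm{z} consumed by the decoder on the following line. A minimal sketch of the linear autoencoder these two equations describe, in PyTorch (not part of the commit; the class name and the 784/32 dimensions are illustrative assumptions):

import torch.nn as nn

class LinearAutoencoder(nn.Module):
    """Single-hidden-layer linear autoencoder from the slide."""
    def __init__(self, n_in=784, n_hidden=32):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)  # z = W_e x + b_e
        self.decoder = nn.Linear(n_hidden, n_in)  # r = W_d z + b_d

    def forward(self, x):
        z = self.encoder(x)  # no activation, so the whole map stays linear
        r = self.decoder(z)
        return r, z
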
@@ -371,7 +371,7 @@
 \begin{itemize}
 \item<+-> In a sparse autoencoder, there can be more hidden units than inputs, but only a small number of the hidden units are allowed to be active at the same time.
 \item<+-> This is simply achieved with a regularised loss function: $\ell = \ell_{mse} + \Omega(\bm{z})$
-\item<+-> A popular choice that you've seen before would be to use an l1 penalty $\Omega(\bm{z}) = \lambda \sum_i | h_i |$
+\item<+-> A popular choice that you've seen before would be to use an l1 penalty $\Omega(\bm{z}) = \lambda \sum_i | z_i |$
 \begin{itemize}
 \item<+-> this of course does have a slight problem... what is the derivative of $y=|x|$ with respect to $x$ at $x=0$?
 \end{itemize}
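
This hunk applies the same h-to-z rename inside the l1 penalty. A hypothetical loss implementing $\ell = \ell_{mse} + \Omega(\bm{z})$ with $\Omega(\bm{z}) = \lambda \sum_i |z_i|$, assuming the LinearAutoencoder sketch above and an arbitrary value for lambda:

import torch.nn.functional as F

lam = 1e-3  # assumed value for the slide's lambda

def sparse_loss(model, x):
    r, z = model(x)
    l_mse = F.mse_loss(r, x)                  # reconstruction term
    omega = lam * z.abs().sum(dim=-1).mean()  # l1 penalty, averaged over the batch
    return l_mse + omega

On the nested bullet's question: $y = |x|$ has no derivative at $x = 0$; autograd frameworks such as PyTorch use the subgradient sign(x), which is 0 at x = 0, so training runs without special handling.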
@@ -388,7 +388,7 @@
 \item<+-> Denoising can help generalise over the test set since the data is distorted by adding noise.
 \end{itemize}
 \item<+-> Pretraining networks
-\item<+-> Anomoly Detection
+\item<+-> Anomaly Detection
 \item<+-> Machine translation
 \item<+-> Semantic segmentation
 \end{itemize}
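
A sketch of the denoising setup mentioned in this hunk's first bullet: corrupt the input with noise, but score the reconstruction against the clean input (the Gaussian noise model and the noise_std value are assumptions, and the model is the sketch above):

import torch
import torch.nn.functional as F

def denoising_loss(model, x, noise_std=0.1):
    x_noisy = x + noise_std * torch.randn_like(x)  # distort the input
    r, _ = model(x_noisy)                          # reconstruct from the noisy copy
    return F.mse_loss(r, x)                        # compare against the clean x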
