\[ \begin{eqnarray*} z_j &=& \sum_{i} w_{i,j} a'_{i} + b_j\\ \textrm{weights}&& w_{i,j}\\ \textrm{bias}&& b_j \end{eqnarray*} \]
… or equivalently
\(\begin{eqnarray} \sigma^{-1}(p) &=& \log\left(\frac{p}{1-p}\right) = \mathrm{logit}(p)\\ \mathrm{logit}(p) &=& \sum_{i}\beta_{i} x_{i} + \alpha \end{eqnarray}\)
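A quick numerical check of this equivalence: the logit is the inverse of the sigmoid, so applying one after the other recovers the input (the value \(z = 1.25\) below is an arbitrary illustration):

```python
import math

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    # sigma^{-1}(p) = log(p / (1 - p))
    return math.log(p / (1.0 - p))

z = 1.25          # arbitrary illustrative value
p = sigmoid(z)
print(logit(p))   # recovers z (up to floating-point rounding)
```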
Let inputs be:
\(\begin{eqnarray} a'_1&=&1\\ a'_2&=&0\\ a'_3&=&1 \end{eqnarray}\)
and we have
\(\begin{eqnarray} z_1 &=& \sum_i w_{i,1}a'_i + b_1\\ a_1 &=& \sigma(z_1) \end{eqnarray}\)
\(z_1 = 0.3 \times 1 + 0.8 \times 0 + 0.2 \times 1 - 0.5 =\) \(0\)
\(a_1 = \sigma(z_1) = \frac{1}{1+e^{-0}} =\) \(0.5\)
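The worked example above can be sketched directly in code, using the same weights, bias, and inputs:

```python
import math

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + math.exp(-z))

# values from the worked example
w = [0.3, 0.8, 0.2]   # weights w_{1,1}, w_{2,1}, w_{3,1}
b = -0.5              # bias b_1
a_in = [1, 0, 1]      # inputs a'_1, a'_2, a'_3

z1 = sum(wi * ai for wi, ai in zip(w, a_in)) + b
a1 = sigmoid(z1)
print(z1, a1)  # 0.0 0.5
```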
“Columns” of one or more neurons
A single Input layer
One or more Hidden layer(s)
A single Output layer
Another drawing style omits \(w\) and \(b\).
Often layers are ‘boxed’
Layers with >1 dimension (e.g., images) get messy!
Simplify! Nodes and arrows are left implicit
Collect similar layers into ‘blocks’
Also other types of layers/blocks (cf. coming lectures)