stevenmiller888.github.io
Search Preview

Steven Miller

stevenmiller888.github.io

SEO audit: Content analysis

Language Error! No language localisation is found.
Title Steven Miller
Text / HTML ratio 96 %
Frame Excellent! The website does not use iFrame solutions.
Flash Excellent! The website does not have any flash contents.
Keywords cloud: = output weights hidden sum alt=> network neural function layer input change propagation networks var activation forward back

Keywords consistency
Keyword Content Title Description Headings
= 86
output 45
weights 40
hidden 40
39
sum 29
Headings Error! The website does not use (H) tags.
Images We found 0 images on this web page.

SEO Keywords (Single)

Keyword Occurrence Density
= 86 4.30 %
output 45 2.25 %
weights 40 2.00 %
hidden 40 2.00 %
39 1.95 %
sum 29 1.45 %
alt=> 29 1.45 %
network 29 1.45 %
29 1.45 %
neural 25 1.25 %
function 24 1.20 %
layer 21 1.05 %
input 19 0.95 %
change 19 0.95 %
propagation 16 0.80 %
networks 16 0.80 %
var 15 0.75 %
activation 14 0.70 %
forward 14 0.70 %
back 14 0.70 %

SEO Keywords (Two Word)

Keyword Occurrence Density
of the 37 1.85 %
in the 19 0.95 %
to the 18 0.90 %
the output 17 0.85 %
the hidden 15 0.75 %
hidden layer 15 0.75 %
change in 14 0.70 %
output sum 13 0.65 %
neural network 12 0.60 %
the input 11 0.55 %
from the 10 0.50 %
the weights 10 0.50 %
neural networks 10 0.50 %
how to 9 0.45 %
the activation 9 0.45 %
the network 9 0.45 %
with the 9 0.45 %
activation function 9 0.45 %
weights = 8 0.40 %
is the 8 0.40 %

SEO Keywords (Three Word)

Keyword Occurrence Density Possible Spam
change in the 10 0.50 % No
the hidden layer 8 0.40 % No
the activation function 8 0.40 % No
a neural network 7 0.35 % No
the change in 7 0.35 % No
of the array 6 0.30 % No
the output sum 6 0.30 % No
Delta weights = 6 0.30 % No
set of weights 5 0.25 % No
in the output 5 0.25 % No
weights between the 5 0.25 % No
of the output 5 0.25 % No
how to build 5 0.25 % No
front of the 4 0.20 % No
the front of 4 0.20 % No
hidden sum = 4 0.20 % No
1 1 1 4 0.20 % No
the left shift 4 0.20 % No
side of the 4 0.20 % No
of the keyboard 4 0.20 % No

SEO Keywords (Four Word)

Keyword Occurrence Density Possible Spam
change in the output 5 0.25 % No
the change in the 5 0.25 % No
keyboard corresponding to the 4 0.20 % No
the weights between the 4 0.20 % No
forward and back propagation 4 0.20 % No
of the keyboard corresponding 4 0.20 % No
the keyboard corresponding to 4 0.20 % No
the front of the 4 0.20 % No
front of the array 4 0.20 % No
side of the keyboard 4 0.20 % No
going to explain how 4 0.20 % No
the left shift key 4 0.20 % No
the product of the 3 0.15 % No
1 1 1 1 3 0.15 % No
build a neural network 3 0.15 % No
determine the change in 3 0.15 % No
how to build your 3 0.15 % No
 

Note that 3 0.15 % No
Delta hidden sum = 3 0.15 % No
in the output sum 3 0.15 % No

Stevenmiller888.github.io Spined HTML


Steven Miller 2017-06-28T04:30:37.786Z http://stevenmiller888.github.com Steven Miller

Intruder: How to crack Wi-Fi networks in Node.js http://stevenmiller888.github.com/intruder-cracking-wifi-networks-in-node 2015-09-25T00:00:00.000Z Steven Miller

<p>I’m going to explain how to use <a href="https://github.com/stevenmiller888/intruder">Intruder</a> to crack a Wi-Fi network in Node.js. Then, I’m going to explain how it works at a high level.</p>

<p>Start by finding the name of the network you want to crack. In this case, we’ll use an arbitrary network named “Home”. Then, you’ll want to <code>require</code> Intruder, initialize it, and call the <code>crack</code> function:</p>

<pre><code>var Intruder = require(&#39;intruder&#39;);
var intruder = Intruder();

intruder.crack(&#39;Home&#39;, function(err, key) {
  if (err) throw new Error(err);
  console.log(key);
});
</code></pre>

<p>That’s it. Sort of. It turns out it might take some time for Intruder to crack the network. So maybe you want to monitor its progress? Here’s how to do that:</p>

<pre><code>var Intruder = require(&#39;intruder&#39;);

Intruder()
  .on(&#39;attempt&#39;, function(ivs) {
    console.log(ivs);
  })
  .crack(&#39;Home&#39;, function(err, key) {
    if (err) throw new Error(err);
    console.log(key);
  });
</code></pre>

<p>Now, I’ll explain how it works:</p>

<ol>
<li><p>When you call <code>intruder.crack</code>, first we look up all the wireless networks in range. Then, we filter them out to find the network that you passed in.</p></li>
<li><p>After finding the specific network, we start sniffing network packets on the network channel.</p></li>
<li><p>Sniffing packets will generate a <code>capture</code> file that contains information about the captured packets. We find that file and then pass the file into <a href="https://github.com/aircrack-ng/aircrack-ng">aircrack</a>, which will attempt to decrypt it. You usually need at least 80,000 <a href="https://en.wikipedia.org/wiki/Initialization_vector">IVs</a>, according to aircrack’s documentation.</p></li>
</ol>

<p>If you have any questions or comments, don’t hesitate to find me on <a href="https://www.twitter.com/stevenmiller888">twitter</a>.</p>

Mind: How to Build a Neural Network (Part Two) http://stevenmiller888.github.com/mind-how-to-build-a-neural-network-part-2 2015-08-14T00:00:00.000Z Steven Miller

<p><em>In this second part on learning how to build a neural network, we will dive into the implementation of a flexible library in JavaScript. In case you missed it, here is <a href="/mind-how-to-build-a-neural-network">Part One</a>, which goes over what neural networks are and how they operate.</em></p>

<h2 id="building-the-mind">Building the Mind</h2>

<p>Building a complete neural network library requires more than just understanding forward and back propagation. We also need to think about how a user of the network will want to configure it (e.g.
set total number of learning iterations) and other API-level design considerations.</p>

<p>To simplify our explanation of neural networks via code, the code snippets below build a neural network, <code>Mind</code>, with a single hidden layer. The actual <a href="https://github.com/stevenmiller888/mind">Mind</a> library, however, provides the flexibility to build a network with multiple hidden layers.</p>

<h3 id="initialization">Initialization</h3>

<p>First, we need to set up our constructor function. Let’s give the option to use the sigmoid activation or the hyperbolic tangent activation function. Additionally, we’ll allow our users to set the learning rate, number of iterations, and number of units in the hidden layer, while providing sane defaults for each. Here’s our constructor:</p>

<pre><code class="lang-javascript">function Mind(opts) {
  if (!(this instanceof Mind)) return new Mind(opts);
  opts = opts || {};

  opts.activator === &#39;sigmoid&#39;
    ? (this.activate = sigmoid, this.activatePrime = sigmoidPrime)
    : (this.activate = htan, this.activatePrime = htanPrime);

  // hyperparameters
  this.learningRate = opts.learningRate || 0.7;
  this.iterations = opts.iterations || 10000;
  this.hiddenUnits = opts.hiddenUnits || 3;
}
</code></pre>

<blockquote><p>Note that here we use the <a href="https://www.npmjs.com/package/sigmoid"><code>sigmoid</code></a>, <a href="https://www.npmjs.com/package/sigmoid-prime"><code>sigmoid-prime</code></a>, <a href="https://www.npmjs.com/package/htan"><code>htan</code></a>, and <a href="https://www.npmjs.com/package/htan-prime"><code>htan-prime</code></a> npm modules.</p></blockquote>

<h3 id="forward-propagation">Forward Propagation</h3>

<p>The forward propagation process is a series of sum products and transformations. Let’s calculate the first hidden sum with all four input data:</p>

<p><img src="http://i.imgur.com/ZhO0Nj2.png" alt=""></p>

<p>This can be represented as such:</p>

<p><img src="http://i.imgur.com/XcSZgTk.png" alt=""></p>

<p>To get the result from the sum, we apply the activation function, sigmoid, to each element:</p>

<p><img src="http://i.imgur.com/rhnNQZW.png" alt=""></p>

<p>Then, we do this again with the hidden result as the new input to get to the final output result. The entire forward propagation code looks like:</p>

<pre><code class="lang-javascript">Mind.prototype.forward = function(examples) {
  var activate = this.activate;
  var weights = this.weights;
  var ret = {};

  ret.hiddenSum = multiply(weights.inputHidden, examples.input);
  ret.hiddenResult = ret.hiddenSum.transform(activate);
  ret.outputSum = multiply(weights.hiddenOutput, ret.hiddenResult);
  ret.outputResult = ret.outputSum.transform(activate);

  return ret;
};
</code></pre>

<blockquote><p>Note that <code>this.activate</code> and <code>this.weights</code> are set at the initialization of a new <code>Mind</code> via <a href="https://github.com/stevenmiller888/mind/blob/master/lib/index.js#L40">passing an <code>opts</code> object</a>. <code>multiply</code> and <code>transform</code> come from an npm <a href="https://www.npmjs.com/package/node-matrix">module</a> for performing basic matrix operations.</p></blockquote>
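<p><em>As an aside, here is a minimal plain-JavaScript sketch, not code from the Mind library, of what one layer of this forward pass computes: each hidden unit takes a weighted sum of the inputs, and the activation function is applied to each sum. The weight and input values below are made up for illustration.</em></p>

<pre><code class="lang-javascript">function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// weights[i][j] is the weight from input j into hidden unit i
var weights = [[0.5, -0.6],
               [0.1,  0.9]];
var input = [1, 0];

var hiddenResult = weights.map(function(row) {
  var sum = row.reduce(function(acc, w, j) { return acc + w * input[j]; }, 0);
  return sigmoid(sum); // apply the activation function to each sum
});

console.log(hiddenResult); // ≈ [0.622, 0.525]
</code></pre>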
<h3 id="back-propagation">Back Propagation</h3>

<p>Back propagation is a bit more complicated. Let’s look at the last layer first. We calculate the <code>output error</code> (same equation as before):</p>

<p><img src="http://i.imgur.com/IAddjWL.png" alt=""></p>

<p>And the equivalent in code:</p>

<pre><code class="lang-javascript">var errorOutputLayer = subtract(examples.output, results.outputResult);
</code></pre>

<p>Then, we determine the change in the output layer sum, or <code>delta output sum</code>:</p>

<p><img src="http://i.imgur.com/4qnVb6S.png" alt=""></p>

<p>And the code:</p>

<pre><code class="lang-javascript">var deltaOutputLayer = dot(results.outputSum.transform(activatePrime), errorOutputLayer);
</code></pre>

<p>Then, we figure out the hidden output changes. We use this formula:</p>

<p><img src="http://i.imgur.com/TR7FS2S.png" alt=""></p>

<p>Here is the code:</p>

<pre><code class="lang-javascript">var hiddenOutputChanges = scalar(multiply(deltaOutputLayer, results.hiddenResult.transpose()), learningRate);
</code></pre>

<p>Note that we scale the change by a magnitude, <code>learningRate</code>, which is from 0 to 1. The learning rate applies a greater or lesser portion of the respective adjustment to the old weight. If there is a large variability in the input (there is little relationship among the training data) and the rate was set high, then the network may not learn well or at all. Setting the rate too high also introduces the risk of <a href="https://en.wikipedia.org/wiki/Overfitting">‘overfitting’</a>, or training the network to generate a relationship from noise instead of the actual underlying function.</p>

<p>Since we’re dealing with matrices, we handle the division by multiplying the <code>delta output sum</code> with the hidden results matrices’ transpose.</p>

<p>Then, we do this process <a href="https://github.com/stevenmiller888/mind/blob/master/lib/index.js#L200">again</a> for the input to hidden layer.</p>

<p>The code for the back propagation function is below. Note that we’re passing what is returned by the <code>forward</code> function as the second argument:</p>

<pre><code class="lang-javascript">Mind.prototype.back = function(examples, results) {
  var activatePrime = this.activatePrime;
  var learningRate = this.learningRate;
  var weights = this.weights;

  // compute weight adjustments
  var errorOutputLayer = subtract(examples.output, results.outputResult);
  var deltaOutputLayer = dot(results.outputSum.transform(activatePrime), errorOutputLayer);
  var hiddenOutputChanges = scalar(multiply(deltaOutputLayer, results.hiddenResult.transpose()), learningRate);
  var deltaHiddenLayer = dot(multiply(weights.hiddenOutput.transpose(), deltaOutputLayer), results.hiddenSum.transform(activatePrime));
  var inputHiddenChanges = scalar(multiply(deltaHiddenLayer, examples.input.transpose()), learningRate);

  // adjust weights
  weights.inputHidden = add(weights.inputHidden, inputHiddenChanges);
  weights.hiddenOutput = add(weights.hiddenOutput, hiddenOutputChanges);

  return errorOutputLayer;
};
</code></pre>

<blockquote><p>Note that <code>subtract</code>, <code>dot</code>, <code>scalar</code>, <code>multiply</code>, and <code>add</code> come from the same npm <a href="https://www.npmjs.com/package/node-matrix">module</a> we used before for performing matrix operations.</p></blockquote>
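<p><em>For intuition, here is the same update written out for a single hidden-to-output weight, using plain numbers instead of matrices. This is an illustrative sketch rather than library code; the values are arbitrary, and the learning rate is the 0.7 default from the constructor above.</em></p>

<pre><code class="lang-javascript">function sigmoidPrime(x) {
  var s = 1 / (1 + Math.exp(-x));
  return s * (1 - s);
}

var target = 1;
var outputResult = 0.6;   // output of forward propagation
var outputSum = 0.4;      // output sum, before the activation function
var hiddenResult = 0.8;   // result of one hidden unit
var learningRate = 0.7;
var oldWeight = 0.5;      // weight from that hidden unit to the output

var error = target - outputResult;                     // errorOutputLayer
var deltaOutputSum = sigmoidPrime(outputSum) * error;  // deltaOutputLayer
var change = learningRate * deltaOutputSum * hiddenResult;

console.log(oldWeight + change); // ≈ 0.554
</code></pre>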
<h3 id="putting-both-together">Putting both together</h3>

<p>Now that we have both the forward and back propagation, we can define the function <code>learn</code> that will put them together. The <code>learn</code> function will accept training data (<code>examples</code>) as an array of matrices. Then, we assign random samples to the initial weights (via <a href="https://github.com/stevenmiller888/sample"><code>sample</code></a>). Lastly, we use a <code>for</code> loop and repeat <code>this.iterations</code> times to do both forward and backward propagation.</p>

<pre><code class="lang-javascript">Mind.prototype.learn = function(examples) {
  examples = normalize(examples);

  this.weights = {
    inputHidden: Matrix({
      columns: this.hiddenUnits,
      rows: examples.input[0].length,
      values: sample
    }),
    hiddenOutput: Matrix({
      columns: examples.output[0].length,
      rows: this.hiddenUnits,
      values: sample
    })
  };

  for (var i = 0; i &lt; this.iterations; i++) {
    var results = this.forward(examples);
    var errors = this.back(examples, results);
  }

  return this;
};
</code></pre>

<p><em>More information about the Mind API <a href="https://github.com/stevenmiller888/mind">here</a>.</em></p>
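<p><em>Putting it together, training on a tiny XOR dataset might look roughly like the sketch below. The module name in the <code>require</code> call, the shape of the training data, and the <code>predict</code> call are assumptions here; see the <a href="https://github.com/stevenmiller888/mind">Mind repository</a> for the exact API.</em></p>

<pre><code class="lang-javascript">// Hypothetical usage sketch: the module name, training-data shape,
// and predict() are assumptions, not confirmed by this article.
var Mind = require(&#39;mind&#39;);

var mind = Mind({
  activator: &#39;sigmoid&#39;,   // or hyperbolic tangent
  learningRate: 0.7,
  iterations: 10000,
  hiddenUnits: 3
});

mind.learn([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);

console.log(mind.predict([1, 1])); // should be close to 0
</code></pre>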
<p>Now you have a basic understanding of how neural networks operate, how to train them, and also how to build your own!</p>

<p>If you have any questions or comments, don’t hesitate to find me on <a href="https://www.twitter.com/stevenmiller888">twitter</a>. Shout out to <a href="https://www.twitter.com/andyjiang">Andy</a> for his help on reviewing this.</p>

<h2 id="additional-resources">Additional Resources</h2>

<p><a href="https://www.youtube.com/watch?v=bxe2T-V8XRs">Neural Networks Demystified</a>, by <a href="https://www.twitter.com/stephencwelch">Stephen Welch</a></p>

<p><a href="http://neuralnetworksanddeeplearning.com/chap3.html">Neural Networks and Deep Learning</a>, by <a href="http://michaelnielsen.org/">Michael Nielsen</a></p>

<p><a href="http://natureofcode.com/book/chapter-10-neural-networks/">The Nature of Code, Neural Networks</a>, by <a href="https://twitter.com/shiffman">Daniel Shiffman</a></p>

<p><a href="https://en.wikipedia.org/wiki/Artificial_neural_network">Artificial Neural Networks</a>, Wikipedia</p>

<p><a href="http://www.cheshireeng.com/Neuralyst/nnbg.htm">Basic Concepts for Neural Networks</a>, by Ross Berteig</p>

<p><a href="http://www.saedsayad.com/artificial_neural_network.htm">Artificial Neural Networks</a>, by <a href="http://www.saedsayad.com/author.htm">Saed Sayad</a></p>

<p><a href="http://www.researchgate.net/post/How_to_decide_the_number_of_hidden_layers_and_nodes_in_a_hidden_layer">How to Decide the Number of Hidden Layers and Nodes in a Hidden Layer</a></p>

<p><a href="http://in.mathworks.com/matlabcentral/answers/72654-how-to-decide-size-of-neural-network-like-number-of-neurons-in-a-hidden-layer-number-of-hidden-lay">How to Decide size of Neural Network like number of neurons in a hidden layer &amp; Number of hidden layers?</a></p>

Mind: How to Build a Neural Network (Part One) http://stevenmiller888.github.com/mind-how-to-build-a-neural-network 2015-08-11T00:00:00.000Z Steven Miller

<p><a href="https://en.wikipedia.org/wiki/Artificial_neural_network">Artificial neural networks</a> are statistical learning models, inspired by biological neural networks (central nervous systems, such as the brain), that are used in <a href="https://en.wikipedia.org/wiki/List_of_machine_learning_concepts">machine learning</a>. These networks are represented as systems of interconnected “neurons”, which send messages to each other. The connections within the network can be systematically adjusted based on inputs and outputs, making them ideal for supervised learning.</p>

<p>Neural networks can be intimidating, especially for people with little experience in machine learning and cognitive science! However, through code, this tutorial will explain how neural networks operate. By the end, you will know how to build your own flexible, learning network, similar to <a href="https://www.github.com/stevenmiller888/mind">Mind</a>.</p>

<p>The only prerequisites are having a basic understanding of JavaScript, high-school Calculus, and simple matrix operations. Other than that, you don’t need to know anything. Have fun!</p>

<h2 id="understanding-the-mind">Understanding the Mind</h2>

<p>A neural network is a collection of “neurons” with “synapses” connecting them. The collection is organized into three main parts: the input layer, the hidden layer, and the output layer. Note that you can have <em>n</em> hidden layers, with the term “deep” learning implying multiple hidden layers.</p>

<p><img src="https://cldup.com/ytEwlOfrRZ-2000x2000.png" alt=""></p>

<p><em>Screenshot taken from <a href="https://www.youtube.com/watch?v=bxe2T-V8XRs">this great introductory video</a>, which trains a neural network to predict a test score based on hours spent studying and sleeping the night before.</em></p>

<p>Hidden layers are necessary when the neural network has to make sense of something really complicated, contextual, or non-obvious, like image recognition. The term “deep” learning came from having many hidden layers. These layers are known as “hidden”, since they are not visible as a network output. Read more about hidden layers <a href="http://stats.stackexchange.com/questions/63152/what-does-the-hidden-layer-in-a-neural-network-compute">here</a> and <a href="http://www.cs.cmu.edu/~dst/pubs/byte-hiddenlayer-1989.pdf">here</a>.</p>

<p>The circles represent neurons and lines represent synapses. Synapses take the input and multiply it by a “weight” (the “strength” of the input in determining the output). Neurons add the outputs from all synapses and apply an activation function.</p>

<p>Training a neural network basically means calibrating all of the “weights” by repeating two key steps, forward propagation and back propagation.</p>

<p>Since neural networks are great for regression, the best input data are numbers (as opposed to discrete values, like colors or movie genres, whose data is better for statistical classification models). The output data will be a number within a range like 0 and 1 (this ultimately depends on the activation function—more on this below).</p>

<p>In <strong>forward propagation</strong>, we apply a set of weights to the input data and calculate an output.
For the first forward propagation, the set of weights is selected randomly.</p>

<p>In <strong>back propagation</strong>, we measure the margin of error of the output and adjust the weights accordingly to decrease the error.</p>

<p>Neural networks repeat both forward and back propagation until the weights are calibrated to accurately predict an output.</p>

<p>Next, we’ll walk through a simple example of training a neural network to function as an <a href="https://en.wikipedia.org/wiki/Exclusive_or">“Exclusive or” (“XOR”) operation</a> to illustrate each step in the training process.</p>

<h3 id="forward-propagation">Forward Propagation</h3>

<p><em>Note that all calculations will show figures truncated to the thousandths place.</em></p>

<p>The XOR function can be represented by the mapping of the below inputs and outputs, which we’ll use as training data. It should provide a correct output given any input accepted by the XOR function.</p>

<pre><code>input | output
--------------
0, 0  | 0
0, 1  | 1
1, 0  | 1
1, 1  | 0
</code></pre>

<p>Let’s use the last row from the above table, <code>(1, 1) =&gt; 0</code>, to demonstrate forward propagation:</p>

<p><img src="http://imgur.com/aTFz1Az.png" alt=""></p>

<p><em>Note that we use a single hidden layer with only three neurons for this example.</em></p>

<p>We now assign weights to all of the synapses. Note that these weights are selected randomly (based on Gaussian distribution) since it is the first time we’re forward propagating. The initial weights will be between 0 and 1, but note that the final weights don’t need to be.</p>

<p><img src="http://imgur.com/Su6Y4UC.png" alt=""></p>

<p>We sum the product of the inputs with their corresponding set of weights to arrive at the first values for the hidden layer. You can think of the weights as measures of influence the input nodes have on the output.</p>

<pre><code>1 * 0.8 + 1 * 0.2 = 1
1 * 0.4 + 1 * 0.9 = 1.3
1 * 0.3 + 1 * 0.5 = 0.8
</code></pre>

<p>We put these sums smaller in the circle, because they’re not the final value:</p>

<p><img src="http://imgur.com/gTvxRwo.png" alt=""></p>

<p>To get the final value, we apply the <a href="https://en.wikipedia.org/wiki/Activation_function">activation function</a> to the hidden layer sums. The purpose of the activation function is to transform the input signal into an output signal; activation functions are necessary for neural networks to model complex non-linear patterns that simpler models might miss.</p>

<p>There are many types of activation functions—linear, sigmoid, hyperbolic tangent, even step-wise. To be honest, I don’t know why one function is better than another.</p>

<p><img src="https://cldup.com/hxmGABAI7Y.png" alt=""></p>

<p><em>Table taken from <a href="http://www.asprs.org/a/publications/pers/2003journal/november/2003_nov_1225-1234.pdf">this paper</a>.</em></p>

<p>For our example, let’s use the <a href="https://en.wikipedia.org/wiki/Sigmoid_function">sigmoid function</a> for activation. The sigmoid function looks like this, graphically:</p>

<p><img src="http://i.imgur.com/RVbqJsg.jpg" alt=""></p>
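<p><em>For reference, here is the sigmoid function (and its derivative, sigmoid prime, which back propagation will need later) written as plain JavaScript; the <code>sigmoid</code> and <code>sigmoid-prime</code> npm modules used by the Mind library provide equivalent implementations.</em></p>

<pre><code class="lang-javascript">// S(x) = 1 / (1 + e^-x), squashes any number into the range (0, 1)
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// S&#39;(x) = S(x) * (1 - S(x)), the slope of the sigmoid at x
function sigmoidPrime(x) {
  return sigmoid(x) * (1 - sigmoid(x));
}

console.log(sigmoid(1.0)); // ≈ 0.731
</code></pre>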
<p>And applying S(x) to the three hidden layer <em>sums</em>, we get:</p>

<pre><code>S(1.0) = 0.73105857863
S(1.3) = 0.78583498304
S(0.8) = 0.68997448112
</code></pre>

<p>We add that to our neural network as hidden layer <em>results</em>:</p>

<p><img src="http://imgur.com/yE88Ryt.png" alt=""></p>

<p>Then, we sum the product of the hidden layer results with the second set of weights (also determined at random the first time around) to determine the output sum.</p>

<pre><code>0.73 * 0.3 + 0.79 * 0.5 + 0.69 * 0.9 = 1.235
</code></pre>

<p>..finally we apply the activation function to get the final output result.</p>

<pre><code>S(1.235) = 0.7746924929149283
</code></pre>

<p>This is our full diagram:</p>

<p><img src="http://imgur.com/IDFRq5a.png" alt=""></p>
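<p><em>As a quick check of the arithmetic, this short plain-JavaScript sketch (again, an illustration rather than code from the article) reproduces the forward pass above:</em></p>

<pre><code class="lang-javascript">function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

var input = [1, 1];
var inputToHidden = [[0.8, 0.2], [0.4, 0.9], [0.3, 0.5]]; // weights into the three hidden neurons
var hiddenToOutput = [0.3, 0.5, 0.9];                     // weights into the output neuron

// hidden sums, then hidden results
var hiddenResult = inputToHidden.map(function(row) {
  return sigmoid(row[0] * input[0] + row[1] * input[1]);
});
// hiddenResult ≈ [0.731, 0.786, 0.690]

// output sum, then output result
var outputSum = hiddenToOutput.reduce(function(acc, w, i) {
  return acc + w * hiddenResult[i];
}, 0);

console.log(sigmoid(outputSum)); // ≈ 0.77
</code></pre>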
<p>Because we used a random set of initial weights, the value of the output neuron is off the mark; in this case by +0.77 (since the target is 0). If we stopped here, this set of weights would be a great neural network for inaccurately representing the XOR operation.</p>

<p>Let’s fix that by using back propagation to adjust the weights to improve the network!</p>

<h3 id="back-propagation">Back Propagation</h3>

<p>To improve our model, we first have to quantify just how wrong our predictions are. Then, we adjust the weights accordingly so that the margin of error is decreased.</p>

<p>Similar to forward propagation, back propagation calculations occur at each “layer”. We begin by changing the weights between the hidden layer and the output layer.</p>

<p><img src="http://imgur.com/kEyDCJ8.png" alt=""></p>

<p>Calculating the incremental change to these weights happens in two steps: 1) we find the margin of error of the output result (what we get after applying the activation function) to back out the necessary change in the output sum (we call this <code>delta output sum</code>) and 2) we extract the change in weights by multiplying <code>delta output sum</code> by the hidden layer results.</p>

<p>The <code>output sum margin of error</code> is the target output result minus the calculated output result:</p>

<p><img src="http://i.imgur.com/IAddjWL.png" alt=""></p>

<p>And doing the math:</p>

<pre><code>Target = 0
Calculated = 0.77
Target - calculated = -0.77
</code></pre>

<p>To calculate the necessary change in the output sum, or <code>delta output sum</code>, we take the derivative of the activation function and apply it to the output sum. In our example, the activation function is the sigmoid function.</p>

<p>To refresh your memory, the activation function, sigmoid, takes the sum and returns the result:</p>

<p><img src="http://i.imgur.com/rKHEE51.png" alt=""></p>

<p>So the derivative of sigmoid, also known as sigmoid prime, will give us the rate of change (or “slope”) of the activation function at the output sum:</p>

<p><img src="http://i.imgur.com/8xQ6TiU.png" alt=""></p>

<p>Since the <code>output sum margin of error</code> is the difference in the result, we can simply multiply that with the rate of change to give us the <code>delta output sum</code>:</p>

<p><img src="http://i.imgur.com/4qnVb6S.png" alt=""></p>

<p>Conceptually, this means that the change in the output sum is the same as the sigmoid prime of the output result. Doing the actual math, we get:</p>

<pre><code>Delta output sum = S&#39;(sum) * (output sum margin of error)
Delta output sum = S&#39;(1.235) * (-0.77)
Delta output sum = -0.13439890643886018
</code></pre>

<p>Here is a graph of the Sigmoid function to give you an idea of how we are using the derivative to move the input towards the right direction. Note that this graph is not to scale.</p>

<p><img src="http://i.imgur.com/ByyQIJ8.png" alt=""></p>

<p>Now that we have the proposed change in the output layer sum (-0.13), let’s use this in the derivative of the output sum function to determine the new change in weights.</p>

<p>As a reminder, the mathematical definition of the <code>output sum</code> is the product of the hidden layer result and the weights between the hidden and output layer:</p>

<p><img src="http://i.imgur.com/ITudruR.png" alt=""></p>

<p>The derivative of the <code>output sum</code> is:</p>

<p><img src="http://i.imgur.com/57mJyOe.png" alt=""></p>

<p>..which can also be represented as:</p>

<p><img src="http://i.imgur.com/TR7FS2S.png" alt=""></p>

<p>This relationship suggests that a greater change in output sum yields a greater change in the weights; input neurons with the biggest contribution (higher weight to output neuron) should experience more change in the connecting synapse.</p>

<p>Let’s do the math:</p>

<pre><code>hidden result 1 = 0.73105857863
hidden result 2 = 0.78583498304
hidden result 3 = 0.68997448112

Delta weights = delta output sum / hidden layer results
Delta weights = -0.1344 / [0.73105, 0.78583, 0.69997]
Delta weights = [-0.1838, -0.1710, -0.1920]

old w7 = 0.3
old w8 = 0.5
old w9 = 0.9

new w7 = 0.1162
new w8 = 0.329
new w9 = 0.708
</code></pre>

<p>To determine the change in the weights between the <em>input and hidden</em> layers, we perform a similar, but notably different, set of calculations.
Note that in the following calculations, we use the initial weights instead of the recently adjusted weights from the first part of the backward propagation.</p>

<p>Remember that the relationship between the hidden result, the weights between the hidden and output layer, and the output sum is:</p>

<p><img src="http://i.imgur.com/ITudruR.png" alt=""></p>

<p>Instead of deriving for <code>output sum</code>, let’s derive for <code>hidden result</code> as a function of <code>output sum</code> to ultimately find out <code>delta hidden sum</code>:</p>

<p><img src="http://i.imgur.com/25TS8NU.png" alt=""> <img src="http://i.imgur.com/iQIR1MD.png" alt=""></p>

<p>Also, remember that the change in the <code>hidden result</code> can also be defined as:</p>

<p><img src="http://i.imgur.com/ZquX1pv.png" alt=""></p>

<p>Let’s multiply both sides by sigmoid prime of the hidden sum:</p>

<p><img src="http://i.imgur.com/X0wvirh.png" alt=""> <img src="http://i.imgur.com/msHbhQl.png" alt=""></p>

<p>All of the pieces in the above equation can be calculated, so we can determine the <code>delta hidden sum</code>:</p>

<pre><code>Delta hidden sum = delta output sum / hidden-to-outer weights * S&#39;(hidden sum)
Delta hidden sum = -0.1344 / [0.3, 0.5, 0.9] * S&#39;([1, 1.3, 0.8])
Delta hidden sum = [-0.448, -0.2688, -0.1493] * [0.1966, 0.1683, 0.2139]
Delta hidden sum = [-0.088, -0.0452, -0.0319]
</code></pre>

<p>Once we get the <code>delta hidden sum</code>, we calculate the change in weights between the input and hidden layer by dividing it with the input data, <code>(1, 1)</code>. The input data here is equivalent to the <code>hidden results</code> in the earlier back propagation process to determine the change in the hidden-to-output weights. Here is the derivation of that relationship, similar to the one before:</p>

<p><img src="http://i.imgur.com/7NmXWSh.png" alt=""> <img src="http://i.imgur.com/1SDxECJ.png" alt=""> <img src="http://i.imgur.com/KYuSAgw.png" alt=""></p>

<p>Let’s do the math:</p>

<pre><code>input 1 = 1
input 2 = 1

Delta weights = delta hidden sum / input data
Delta weights = [-0.088, -0.0452, -0.0319] / [1, 1]
Delta weights = [-0.088, -0.0452, -0.0319, -0.088, -0.0452, -0.0319]

old w1 = 0.8
old w2 = 0.4
old w3 = 0.3
old w4 = 0.2
old w5 = 0.9
old w6 = 0.5

new w1 = 0.712
new w2 = 0.3548
new w3 = 0.2681
new w4 = 0.112
new w5 = 0.8548
new w6 = 0.4681
</code></pre>

<p>Here are the new weights, right next to the initial random starting weights as comparison:</p>

<pre><code>old         new
-----------------
w1: 0.8     w1: 0.712
w2: 0.4     w2: 0.3548
w3: 0.3     w3: 0.2681
w4: 0.2     w4: 0.112
w5: 0.9     w5: 0.8548
w6: 0.5     w6: 0.4681
w7: 0.3     w7: 0.1162
w8: 0.5     w8: 0.329
w9: 0.9     w9: 0.708
</code></pre>
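<p><em>The input-to-hidden numbers above can be reproduced with a short plain-JavaScript sketch that follows the article’s formulas (including the division by the hidden-to-output weights and by the inputs). This is an illustration for checking the arithmetic, not code from the Mind library:</em></p>

<pre><code class="lang-javascript">function sigmoidPrime(x) {
  var s = 1 / (1 + Math.exp(-x));
  return s * (1 - s);
}

var deltaOutputSum = -0.1344;          // from the output-layer step above
var hiddenToOutput = [0.3, 0.5, 0.9];  // w7, w8, w9
var hiddenSums = [1, 1.3, 0.8];
var inputs = [1, 1];
var oldWeights = [0.8, 0.4, 0.3,       // w1, w2, w3 (from input 1)
                  0.2, 0.9, 0.5];      // w4, w5, w6 (from input 2)

// Delta hidden sum = delta output sum / hidden-to-output weights * S&#39;(hidden sum)
var deltaHiddenSum = hiddenToOutput.map(function(w, i) {
  return deltaOutputSum / w * sigmoidPrime(hiddenSums[i]);
});
// deltaHiddenSum ≈ [-0.088, -0.0452, -0.0319]

// Delta weights = delta hidden sum / input data, added to the old weights
var newWeights = oldWeights.map(function(w, i) {
  return w + deltaHiddenSum[i % 3] / inputs[Math.floor(i / 3)];
});

console.log(newWeights); // ≈ [0.712, 0.3548, 0.2681, 0.112, 0.8548, 0.4681]
</code></pre>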
<p>Once we arrive at the adjusted weights, we start again with forward propagation. When training a neural network, it is common to repeat both these processes thousands of times (by default, Mind iterates 10,000 times).</p>

<p>And doing a quick forward propagation, we can see that the final output here is a little closer to the expected output:</p>

<p><img src="http://i.imgur.com/UNlffE1.png" alt=""></p>

<p>Through just one iteration of forward and back propagation, we’ve already improved the network!</p>

<p><em>Check out <a href="https://www.youtube.com/watch?v=GlcnxUlrtek">this short video</a> for a great explanation of identifying global minima in a cost function as a way to determine necessary weight changes.</em></p>

<p>If you enjoyed learning about how neural networks work, check out <a href="/mind-how-to-build-a-neural-network-part-2">Part Two</a> of this post to learn how to build your own neural network.</p>

<p><strong>Note: I’ve been working on a new project called <a href="https://maji.cloud/products/config">Maji Config</a>. If you’re tired of duplicating config all over your codebase, or having to redeploy all your apps whenever you need to change config, this might work well for you. I’d love to hear what you think of it. Feel free to send me an <a href="mailto:stevenmiller888@me.com?Subject=Hello">email</a>.</strong></p>

Remembering `.shift()` and `.unshift()` http://stevenmiller888.github.com/remembering-shift-vs-unshift 2015-05-22T00:00:00.000Z Steven Miller

<p>If you’re like me, you forget the difference between <code>.shift()</code> and <code>.unshift()</code> all the time. Here’s a little trick to remembering them. Picture a keyboard. Now think of that keyboard as an array, with the left side of the keyboard corresponding to the front of the array, and the right side of the keyboard corresponding to the back of the array. Imagine yourself pressing down the left <code>shift</code> key. Think of this as “removing” it from the keyboard (array). Similarly, the <code>shift</code> function removes an element from the front of the array. Now picture yourself removing your finger from the left <code>shift</code> key, and it comes back up. You just “added” (or unshifted) an element to the array.</p>
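<p><em>A quick sketch of the two methods in action:</em></p>

<pre><code class="lang-javascript">var keys = [&#39;shift&#39;, &#39;a&#39;, &#39;s&#39;, &#39;d&#39;];

keys.shift();            // removes &#39;shift&#39; from the front; keys is now [&#39;a&#39;, &#39;s&#39;, &#39;d&#39;]
keys.unshift(&#39;shift&#39;);   // adds &#39;shift&#39; back to the front

console.log(keys);       // [&#39;shift&#39;, &#39;a&#39;, &#39;s&#39;, &#39;d&#39;]
</code></pre>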