This repository has been archived by the owner on Apr 18, 2023. It is now read-only.

build based on bcf9b8b
Documenter.jl committed Apr 14, 2023
1 parent 0da9b4c commit 149bc7b
Showing 5 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion dev/index.html
@@ -15,4 +15,4 @@
end</code></pre><p><code>@unionise</code> simply changes the method signature so that each argument accepts the union of the types specified and <code>Nabla.jl</code>&#39;s internal <code>Node</code> type. This has no impact on the performance of your code when arguments of the types specified in the definition are provided, so you can safely <code>@unionise</code> code without worrying about potential performance implications.</p><h2 id="Low-Level-Interface"><a class="docs-heading-anchor" href="#Low-Level-Interface">Low-Level Interface</a><a id="Low-Level-Interface-1"></a><a class="docs-heading-anchor-permalink" href="#Low-Level-Interface" title="Permalink"></a></h2><p>We now use <code>Nabla.jl</code>&#39;s low-level interface to take the gradient of <code>f</code> w.r.t. <code>x</code> and <code>y</code> at the values of <code>x</code> and <code>y</code> generated above. We first place <code>x</code> and <code>y</code> into <code>Leaf</code> containers so that these variables can be traced by <code>Nabla.jl</code>. This is achieved by creating a <code>Tape</code> object, onto which all computations involving <code>x</code> and <code>y</code> are recorded, as follows:</p><pre><code class="language-julia hljs">tape = Tape()
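# As described above, every computation involving values derived from this
# tape's Leaf objects is recorded onto `tape`; ∇ later uses that record to
# compute gradients.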
x_ = Leaf(tape, x)
y_ = Leaf(tape, y)</code></pre><p>which can be achieved more concisely using Julia&#39;s broadcasting capabilities:</p><pre><code class="language- hljs">x_, y_ = Leaf.(Tape(), (x, y))</code></pre><p>Note that it is critical that <code>x_</code> and <code>y_</code> are constructed using the same <code>Tape</code> instance; currently, <code>Nabla.jl</code> will fail silently if this is not the case. We then simply pass <code>x_</code> and <code>y_</code> to <code>f</code> instead of <code>x</code> and <code>y</code>:</p><pre><code class="language- hljs">z_ = f(x_, y_)</code></pre><p>We can compute the gradients of <code>z_</code> w.r.t. <code>x_</code> and <code>y_</code> using <code>∇</code>, and access them by indexing the output with <code>x_</code> and <code>y_</code>:</p><pre><code class="language- hljs">∇z = ∇(z_)
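# ∇z behaves like a mapping from inputs to gradients: indexing it with the
# Leaf objects x_ and y_ retrieves the gradient of z_ w.r.t. each input.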
(∇x, ∇y) = (∇z[x_], ∇z[y_])</code></pre><h2 id="Gotchas-and-Best-Practice"><a class="docs-heading-anchor" href="#Gotchas-and-Best-Practice">Gotchas and Best Practice</a><a id="Gotchas-and-Best-Practice-1"></a><a class="docs-heading-anchor-permalink" href="#Gotchas-and-Best-Practice" title="Permalink"></a></h2><ul><li><code>Nabla.jl</code> does not yet cover the entire standard library, owing to finite resources and competing priorities. Particularly notable omissions are the subtypes of <code>Factorization</code> and all in-place functions; both will be addressed in the future.</li><li>The usual RMAD gotcha applies: because each operation performed during the execution of a function must be recorded for use in efficient gradient computation, the memory requirement of a programme scales approximately linearly with the length of the programme. Although the dynamically constructed computation graph supports all forms of control flow, long <code>for</code> / <code>while</code> loops should therefore be used with care, so as to avoid running out of memory.</li><li>In a similar vein, develop a (strong) preference for higher-order functions and linear algebra over for-loops: <code>Nabla.jl</code> has optimisations targeting Julia&#39;s higher-order functions (<code>broadcast</code>, <code>mapreduce</code>, and friends), and consequently loop fusion / &quot;dot syntax&quot;, as well as linear algebra operations, so these should be used wherever possible.</li></ul></article><nav class="docs-footer"><a class="docs-footer-nextpage" href="pages/api.html">API »</a><div class="flexbox-break"></div><p class="footer-message">Powered by <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> and the <a href="https://julialang.org/">Julia Programming Language</a>.</p></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header 
class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> version 0.27.24 on <span class="colophon-date" title="Thursday 13 April 2023 11:21">Thursday 13 April 2023</span>. Using Julia version 1.6.7.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
