
Idea: Transformed Leafs #73

wesselb opened this issue Oct 25, 2017 · 0 comments

wesselb commented Oct 25, 2017

It may happen that one wants to optimise a variable x over the positive reals. Naively performing gradient descent won't work, because the variable may go negative. A commonly used workaround is to instead optimise log(x); that is, to parametrise the domain of the variable using some invertible transform f such that f_inv(x) is unconstrained:

x = 1.0

x_log = Leaf(Tape(), log(x))  # unconstrained representation of x
x_ = exp(x_log)               # map back onto the positive reals
y = f(x_)

x′ = exp(step(∇(y), x_log, optimiser))
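
For concreteness, here is the same trick as a self-contained sketch, without the tape machinery (the objective f(x) = (x - 2)^2, the step size, and the iteration count are made up for illustration):

function optimise_positive(f′; x0 = 1.0, η = 0.1, iters = 100)
    z = log(x0)                # optimise the unconstrained z = log(x)
    for _ in 1:iters
        x = exp(z)
        z -= η * f′(x) * x     # chain rule: d/dz f(exp(z)) = f′(x) * exp(z)
    end
    return exp(z)              # positive by construction
end

optimise_positive(x -> 2 * (x - 2))  # ≈ 2.0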

This, however, is rather clumsy to write. I therefore propose a type TransformedLeaf, which implements the above approach but hides the back-and-forth transformation of the variable:

type TransformedLeaf{T} <: Node{T}
    val::T             # value in the constrained space, f(inverse.val)
    tape::Tape
    pos::Int
    inverse::Node{T}   # leaf holding the unconstrained representation
    f::Function        # maps the unconstrained space onto the constrained one
end

function TransformedLeaf(tape::Tape, x, f::Function, f_inv::Function)
    before = Leaf(tape, f_inv(x))  # leaf storing the unconstrained representation
    after = f(before)              # transform back onto the constrained domain
    return TransformedLeaf(after.val, after.tape, after.pos, before, f)
end

positive(tape::Tape, x) = TransformedLeaf(tape, x, x -> exp.(x), x -> log.(x))
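
The same construction extends to other constraints. As a sketch (this bounded helper is hypothetical, not part of the proposal above), a variable could be kept inside an interval (lo, hi) via a scaled logistic and its inverse, the logit:

bounded(tape::Tape, x, lo, hi) = TransformedLeaf(
    tape,
    x,
    z -> lo .+ (hi - lo) ./ (1 .+ exp.(-z)),  # logistic: maps ℝ onto (lo, hi)
    x -> log.((x .- lo) ./ (hi .- x)),        # logit: maps (lo, hi) onto ℝ
)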

# Ordinary leaves: update the value directly.
step(t::Tape, x::Node, opt::Optimiser) = x.val + step(update!(opt, t[x]))

# Transformed leaves: update in the unconstrained space, then map back through f.
step(t::Tape, x::TransformedLeaf, opt::Optimiser) =
    x.f(x.inverse.val + step(update!(opt, t[x.inverse])))
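
As a quick sanity check, the update for positive is multiplicative: expanding the second method with f = exp and f_inv = log gives exp(log(x) + Δ) == x * exp(Δ), which is positive for any real Δ (Δ standing in for whatever the optimiser returns):

x, Δ = 1.0, -0.3
exp(log(x) + Δ) ≈ x * exp(Δ)  # true; positivity cannot be violated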

Now, one can simply write:

x = 1.0

x_ = positive(Tape(), x)
y = f(x_)

x′ = step(∇(y), x_, optimiser)