I gave a talk last night at the Berlin machine learning meetup on learning graph embeddings in hyperbolic space, featuring the recent NIPS 2017 paper of Nickel & Kiela. Covered are:
- An illustration of why the Euclidean plane is not a good place to embed trees (since circle circumference grows only linearly in the radius);
- Extending this same argument to higher dimensional Euclidean space;
- An introduction to the hyperbolic plane and the Poincaré disc model;
- A discussion of Rik Sarkar’s result that trees embed with arbitrarily small error in the hyperbolic plane;
- A demonstration that, in the hyperbolic plane, circle circumference is exponential in the radius (better written here);
- A review of the results of Nickel & Kiela on the (transitive closure of the) WordNet hypernymy graph;
- Some thoughts on the gradient optimisation (perhaps better written here).
And here are the slides!
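To make the first two bullet points concrete: in the Euclidean plane a circle of radius r has circumference 2πr, while in the hyperbolic plane (of curvature −1) it is 2π·sinh(r), which grows exponentially in r — so there is exponentially more "room" at distance r from the origin. These formulas are standard; the little script below just illustrates the gap (the function names are my own):

```python
import math

def euclidean_circumference(r):
    # Circumference of a circle of radius r in the Euclidean plane.
    return 2 * math.pi * r

def hyperbolic_circumference(r):
    # Circumference of a circle of radius r in the hyperbolic plane
    # of constant curvature -1: exponential in r, since sinh(r) ~ e^r / 2.
    return 2 * math.pi * math.sinh(r)

for r in [1, 5, 10]:
    print(r, euclidean_circumference(r), hyperbolic_circumference(r))
```

For small radii the two agree (sinh(r) ≈ r near zero), which is why the difference only matters when embedding objects, like trees, whose size grows quickly with their radius.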
Nickel & Kiela had a great paper on embedding graphs in hyperbolic space at NIPS 2017. They work with the Poincaré ball model of hyperbolic space. This is just the interior of the unit ball, equipped with an appropriate Riemannian metric. This metric is conformal, meaning that the inner product on the tangent spaces of the Poincaré ball differs from that of the (Euclidean) ambient space by only a scalar factor. This means that the hyperbolic gradient at a point can be obtained from the Euclidean gradient at that same point just by rescaling. That is, you pretend for a moment that your objective function is defined in Euclidean space, calculate the gradient as usual, and just rescale. This scaling factor depends on the Euclidean distance of the point from the origin, as depicted below:
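Concretely, the metric tensor at a point x of the Poincaré ball is the Euclidean one scaled by the square of the conformal factor λ_x = 2 / (1 − ‖x‖²), so the hyperbolic (Riemannian) gradient is the Euclidean gradient multiplied by 1/λ_x² = ((1 − ‖x‖²)/2)². A minimal sketch of the rescaling (the function name is my own):

```python
import numpy as np

def riemannian_gradient(x, euclidean_grad):
    """Rescale the Euclidean gradient at a point x inside the unit ball
    by ((1 - ||x||^2) / 2)^2, the inverse square of the conformal factor
    lambda_x = 2 / (1 - ||x||^2) of the Poincare ball metric."""
    scale = ((1.0 - np.dot(x, x)) / 2.0) ** 2
    return scale * euclidean_grad
```

Note that the scale factor shrinks to zero as x approaches the boundary of the ball, so gradient steps naturally become smaller far from the origin.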
So far, so good. What the authors then do is simply add the (rescaled) gradient to obtain the new value of the parameter vector, which is fine so long as you only take a small step, and so long as you don’t accidentally step over the boundary of the Poincaré disc! A friend described this as the Buzz Lightyear update (“to infinity, and beyond!”). While adding the gradient vector seems to work fine in practice, it does seem rather brutal. The root of the “problem” (if we agree to call it one) is that we aren’t following the geodesics of the manifold – to perform an update, we should really be applying the exponential map at the current point to the gradient vector. Geodesics on the Poincaré disc look like this:
that is, they are arcs of circles that intersect the boundary of the Poincaré disc at right angles, or diameters (the latter being a limiting case of the former). With that in mind, here’s a picture showing how the Buzz Lightyear update on the Poincaré disc could be sub-optimal:
The blue vector is the hyperbolic gradient vector that is added to the current parameter vector, taking us out of the Poincaré disc. The resulting vector is then pulled back (along the ray through the faintly-marked origin) until it is within the disc by some small margin, yielding the new value of the parameter vector. On the other hand, if you followed the geodesic from the current point to which the gradient vector is tangent, you’d end up at the end of the red curve. Which is quite some distance away.
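The two updates can be sketched side by side. The projection step is the pull-back Nickel & Kiela use when a step lands outside the ball; the closed-form exponential map via Möbius addition is not from their paper but from later work on hyperbolic networks (Ganea et al., 2018). All function names here are my own:

```python
import numpy as np

EPS = 1e-5

def project(theta, eps=EPS):
    # If an update lands on or outside the unit ball, pull the point back
    # along the ray from the origin to just inside the boundary.
    norm = np.linalg.norm(theta)
    if norm >= 1.0:
        theta = theta / norm * (1.0 - eps)
    return theta

def buzz_lightyear_update(theta, riem_grad, lr):
    # Take a straight (ambient, Euclidean) step, then project back into the disc.
    return project(theta - lr * riem_grad)

def mobius_add(x, y):
    # Moebius addition on the Poincare ball.
    xy, xx, yy = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * xy + yy) * x + (1 - xx) * y
    den = 1 + 2 * xy + xx * yy
    return num / den

def exp_map(x, v):
    # Exponential map at x: follow the geodesic with initial velocity v for
    # unit time. Closed form via Moebius addition (Ganea et al., 2018).
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return x
    lam = 2.0 / (1.0 - np.dot(x, x))  # conformal factor at x
    return mobius_add(x, np.tanh(lam * norm_v / 2.0) * v / norm_v)

def geodesic_update(theta, riem_grad, lr):
    # The geodesic update: apply the exponential map to the scaled
    # negative gradient instead of adding it in the ambient space.
    return exp_map(theta, -lr * riem_grad)
```

Note that the exponential map can never leave the disc — no matter how large the step, tanh keeps the image strictly inside the boundary — whereas the straight-line update needs the explicit projection.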
We show that if the contour lines of a function are symmetric with respect to some rotation or reflection, then so is the evolution of gradient descent when minimising that function. Rotation of the space on which the function is evaluated effects a corresponding rotation of each of the points visited under gradient descent (similarly, for reflections).
This ultimately comes down to showing the following: if f is the differentiable function being minimised and T is a rotation or reflection that preserves the contours of f, then

∇f(Tx) = T ∇f(x)    (1)

for all points x.
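For completeness, here is the chain-rule argument behind this equivariance (a sketch, assuming f is continuously differentiable):

```latex
% Chain rule: for any linear map T,
\nabla (f \circ T)(x) = T^{\top} \, \nabla f(Tx).
% If T preserves the contours of f, then f \circ T = f, so
\nabla f(x) = T^{\top} \, \nabla f(Tx).
% If moreover T is orthogonal, then T^{\top} = T^{-1}, and hence
\nabla f(Tx) = T \, \nabla f(x). \tag{1}
```

Applying this to each step of gradient descent shows that the iterates starting from Tx₀ are exactly the images under T of the iterates starting from x₀.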
We consider below three one-dimensional examples that demonstrate that, even if the function is symmetric with respect to all orthogonal transformations, it is necessary that the transformation be orthogonal in order for the property (1) above to hold.
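The positive direction of the claim is easy to check numerically. Here is a quick illustration with a rotationally symmetric function (the example function and step size are my own): gradient descent started from a rotated initial point visits the rotations of the original iterates.

```python
import numpy as np

def grad_f(x):
    # f(x) = (||x||^2)^2 has circular contours, so it is symmetric under
    # every rotation and reflection.  Its gradient is 4 ||x||^2 x.
    return 4.0 * np.dot(x, x) * x

def gradient_descent(x0, lr=0.05, steps=20):
    xs = [np.array(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] - lr * grad_f(xs[-1]))
    return xs

# A rotation of the plane by an arbitrary angle.
angle = 0.7
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])

x0 = np.array([0.8, -0.3])
plain = gradient_descent(x0)
rotated = gradient_descent(R @ x0)

# Each iterate of the rotated run is the rotation of the plain run's iterate.
for a, b in zip(plain, rotated):
    assert np.allclose(R @ a, b)
```

The same check with a non-orthogonal linear map (e.g. a scaling) fails, which is what the one-dimensional counterexamples make precise.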