Publications

LambdaGo: A Functional Extension of the Go Programming Language

The main idea of this project is to create a functional programming language that transpiles to Go.
LambdaGo code:
# Fibonacci sequence

fib::int->[int].
fib n = [0] ++ (fib2 0 1 (n-1)).

fib2::int->int->int->[int].
fib2 a b n  | n > 0 = [b] ++ (fib2 b (a+b) (n-1))
            | otherwise = <[int]> [].
Resulting Go code:
// Fib::int->[int]
func Fib(arg0 int) []int {
	return concat_7a637([]int{0})(Fib2(0)(1)(sub_e2fbf(arg0)(1)))
}

// Fib2::int->int->int->[int]
func Fib2(arg0 int) func(int) func(int) []int {
	return func(arg1 int) func(int) []int {
		return func(arg2 int) []int {
			if gt_fd2cf(arg2)(0) {
				return concat_7a637([]int{arg1})(Fib2(arg1)(add_e2fbf(arg0)(arg1))(sub_e2fbf(arg2)(1)))
			}
			return type_cast_36e98([]interface{}{})
		}
	}
}
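The central trick visible in the generated code above is currying: a multi-argument LambdaGo function becomes a chain of single-argument Go closures, so partial application works the same way it does in the source language. A minimal, self-contained sketch of the same pattern in plain Go (illustrative only; `add` is a hypothetical example, not part of the generated helper code):

```go
package main

import "fmt"

// add is a curried two-argument function: each call consumes one
// argument and returns a closure awaiting the next, mirroring how
// the transpiler emits Fib2 above.
func add(a int) func(int) int {
	return func(b int) int {
		return a + b
	}
}

func main() {
	inc := add(1)        // partial application: one argument supplied
	fmt.Println(inc(41)) // prints 42
}
```

Partial applications like `add(1)` are first-class values, which is what lets the transpiled `Fib2(0)(1)` in the code above be built up one argument at a time.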
More details can be found on the project page.

Theses

Rotation, Translation and Scale Invariant Graph Neural Networks (Master's thesis)

  • Created GNNs whose output is invariant to rotation, translation, and scaling of the input
  • Previous methods handled only rotation and translation invariance; this is the first to also include scale invariance
  • Link to pdf
Condensed version of the solution:
\(M_i^l = \max(\{\|s_i^l - s_j^l\|^2,\ \forall j \in N_i\})\)
\(m_{ij}^l = \phi_e\left(h_i^l, h_j^l, \frac{\|s_i^l - s_j^l\|^2}{\max(M_i^l, M_j^l)}\right)\)
\(s_i^{l+1} = s_i^l + \frac{1}{card(N_i)}\sum_{j \in N_i} (s_i^l - s_j^l)\,\phi_s(m_{ij}^l)\)
\(m_i^l = \sum_{j \in N_i} m_{ij}^l\)
\(h_i^{l+1} = \phi_n(h_i^l, m_i^l)\)
The main idea is to use the normalized node distances instead of the actual positions of the nodes to perform the graph convolution.
This works because the node distances do not change if:
  • The input is translated by a certain amount
  • The input is rotated around the origin
The node distances are normalized by dividing the squared length of each edge by the maximum squared edge length within the 1-hop neighborhood. Any global scaling constant appears in both the numerator and the denominator, so it cancels in the division.
Normalizing over 1-hop neighborhoods rather than globally is more robust to noise and outliers, because a noisy or outlying node only affects the nodes within one hop of it.
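The normalization step can be sketched in a few lines, assuming the graph is given as node positions plus an adjacency list (all names here are hypothetical, not from the thesis code):

```go
package main

import "fmt"

// maxSqDist computes, for each node i, the maximum squared edge
// length within its 1-hop neighborhood (M_i in the equations above).
func maxSqDist(pos [][2]float64, nbrs [][]int) []float64 {
	M := make([]float64, len(pos))
	for i, js := range nbrs {
		for _, j := range js {
			dx := pos[i][0] - pos[j][0]
			dy := pos[i][1] - pos[j][1]
			if d := dx*dx + dy*dy; d > M[i] {
				M[i] = d
			}
		}
	}
	return M
}

// normalizedEdge divides the squared length of edge (i, j) by the
// larger of the two 1-hop maxima, so a global scaling of all
// positions cancels out.
func normalizedEdge(pos [][2]float64, M []float64, i, j int) float64 {
	dx := pos[i][0] - pos[j][0]
	dy := pos[i][1] - pos[j][1]
	m := M[i]
	if M[j] > m {
		m = M[j]
	}
	return (dx*dx + dy*dy) / m
}

func main() {
	pos := [][2]float64{{0, 0}, {1, 0}, {1, 2}}
	nbrs := [][]int{{1, 2}, {0, 2}, {0, 1}}
	M := maxSqDist(pos, nbrs)
	fmt.Println(normalizedEdge(pos, M, 0, 1))

	// Scaling every position by 3 multiplies all squared distances
	// by 9 in both numerator and denominator, leaving the result unchanged.
	for i := range pos {
		pos[i][0] *= 3
		pos[i][1] *= 3
	}
	M2 := maxSqDist(pos, nbrs)
	fmt.Println(normalizedEdge(pos, M2, 0, 1)) // same value as before
}
```

Scaling the positions changes `M` and the raw edge lengths by the same factor, which is exactly why the ratio fed into the edge network is scale invariant.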
More details can be found in the text of the thesis.