Neural Lambda Machines and Software 2.0

In 1956, the field of modern Artificial Intelligence (AI) was born. In its early days, top researchers tried to build rule-based systems: explicit rules that a computer program could follow to mimic human behavior. This naïve rule-based approach proved hopeless for creating AI. Eventually, our collective approach to AI was revolutionized by Deep Learning (DL). Today, many systems marketed as AI, especially in computer vision, use DL to some degree.

In the last decade, the tech world has seen an explosion of DL applications. From smart object tracking, to an AI that beat the world champion at Go, to generating talking paintings, we have fundamentally redefined what is possible with AI-based software. Despite these successes, we have not yet successfully applied DL to learning tasks that are fundamentally algorithmic. Specifically, we want a DL framework that learns to program, without a human writing a single line of code.

In 2014, Alex Graves and his colleagues at Google DeepMind invented the Neural Turing Machine (NTM). The NTM was the first widely known DL architecture designed to be "Turing complete", meaning it was theoretically capable of learning to program anything a human software engineer is capable of. While a theoretical success, the NTM architecture fell short of being practical. In 2016, its successor, the Differentiable Neural Computer (DNC), also from Graves' group, was able to solve concrete programming problems such as finding shortest paths in a graph. Yet even the DNC was limited in its capabilities and has yet to see widespread commercial application.


What if it were possible, and practical, to feed a data structure into a DL framework and have it output the correct transformation of that data structure, as if a human software engineer had written code to do so? What if we took it a step further and converted that learned transformation into source code in a given programming language? Neurales aims to achieve this with our proprietary "Neural Lambda Machine" (NLM).
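
To make the source-code step concrete, here is a minimal sketch of how a learned transformation, once expressed as an Abstract Syntax Tree, could be rendered as source code using Python's standard `ast` module. Everything here, including the template, the `learned` function name, and the choice of `sorted` as the learned operation, is a hypothetical illustration, not the NLM's actual mechanism.

```python
import ast

# Start from a code template and splice in the "learned" operation.
# We pretend the learned transformation turned out to be `sorted`.
template = ast.parse("def learned(xs):\n    return OP(xs)")
learned_op = "sorted"  # stand-in for what the NLM would output

# Replace the OP placeholder with the learned operation's name.
for node in ast.walk(template):
    if isinstance(node, ast.Name) and node.id == "OP":
        node.id = learned_op

# Render the modified AST back to Python source (requires Python 3.9+).
print(ast.unparse(template))
```

Going from an AST to source text is the easy direction; the hard part, which the NLM targets, is producing a correct transformation (and eventually its AST) from input/output examples alone.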

The NTM and the DNC were both inspired by the Turing-machine view of computation: a program is a state that is transformed, step by step, toward the desired target. The NLM is instead inspired by the Lambda Calculus, a mathematical theory that abstracts a computer program as a black box taking input and producing output. The Turing machine and the Lambda Calculus are mathematically equivalent in computational power, but the NLM is designed to be both theoretically sound and built with the end user in mind.
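
The contrast between the two views can be sketched in a few lines of Python. This is purely illustrative of the two perspectives, not of either architecture:

```python
# Turing-machine style: a state is transformed step by step.
def sum_as_state_machine(xs):
    total = 0          # the "state"
    for x in xs:       # each step rewrites the state
        total += x
    return total

# Lambda-calculus style: a black box from input to output,
# with no visible intermediate state.
def sum_as_black_box(xs):
    return sum(xs)

# Both views compute the same function.
assert sum_as_state_machine([1, 2, 3]) == sum_as_black_box([1, 2, 3]) == 6
```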

In terms of practical advantages, the NLM can learn from relatively few examples, and it has a mechanism for learning not just some algorithm, but a fast one: the module is incentivized through a custom loss function to make the learned algorithm efficient, in the sense that it takes less time to transform input to output. How is this achieved?
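
The general shape of such an objective can be sketched as a task loss plus a compute penalty. The function below is a hypothetical sketch under assumed names and a made-up trade-off weight `lam`; it is not Neurales' actual loss:

```python
import numpy as np

def efficiency_aware_loss(prediction, target, layers_used, total_layers,
                          lam=0.1):
    """Sketch of a loss that rewards shorter compute paths.

    The first term measures output correctness; the second penalizes
    the fraction of layers actually executed, nudging training toward
    faster learned programs. `lam` trades accuracy against speed and
    is an assumed hyperparameter.
    """
    task_loss = np.mean((prediction - target) ** 2)
    compute_penalty = layers_used / total_layers
    return task_loss + lam * compute_penalty

# All else equal, a run that used 3 of 6 layers scores better (lower)
# than one that used 4 of 6.
pred = np.array([1.0, 2.0, 3.0])
tgt = np.array([1.0, 2.0, 3.0])
assert efficiency_aware_loss(pred, tgt, 3, 6) < efficiency_aware_loss(pred, tgt, 4, 6)
```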

During training, the NLM’s proprietary attention mechanism learns weights that route intermediate activation maps directly to the output, skipping the remaining layers, whenever an internal “confidence” score passes a threshold. For example, rather than passing through layers 1 through 6, the NLM may go from layer 3 straight to layer 6, bypassing layers 4 and 5, once the confidence mechanism clears the threshold. This means the NLM first learns some algorithm and may then, later in training, find a “faster route” to the solution. This translates to less run time when the learned program is run on unseen data. In future updates, we aim to have the NLM estimate the time and/or space complexity of a learned algorithm.
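
The routing idea resembles what the literature calls early exit or adaptive computation. A toy forward pass makes the control flow explicit; the layers, the confidence proxy, and the threshold below are all assumptions for illustration, not the NLM's internals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 6-layer network: each "layer" is a random linear map plus tanh.
# (The default-argument trick gives each lambda its own fixed weights.)
layers = [lambda x, w=rng.normal(size=(4, 4)): np.tanh(x @ w)
          for _ in range(6)]

def confidence(x):
    # Assumed proxy for confidence: how saturated the activations are.
    return float(np.mean(np.abs(x)))

def forward(x, threshold=0.8):
    used = 0
    for layer in layers:
        x = layer(x)
        used += 1
        if confidence(x) >= threshold:
            break  # early exit: route straight to output, skip the rest
    return x, used

out, layers_used = forward(rng.normal(size=(1, 4)))
print(f"exited after {layers_used} of {len(layers)} layers")
```

In a real architecture the exit decision would itself be learned end to end (with a differentiable relaxation of the hard `break`), so that training can trade a little accuracy for a shorter compute path.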

The NLM is already capable of learning a sorting algorithm, computing a Fast Fourier Transform (FFT) in specialized cases, and more. While it cannot yet handle general data structures such as graphs and hash tables, or generate source code from a learned Abstract Syntax Tree (AST), Neurales aims to make these features available as soon as possible. Those who pre-order Neurales will be eligible for a discount! Below, we link to a video of the NLM learning to sort an array of integers in real time.
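
For context on the example-driven setup: a system like this learns from input/output pairs rather than from code. Here is a sketch of what a tiny training set for the sorting task might look like; the pair format is illustrative, not Neurales' actual data schema:

```python
import random

random.seed(0)

def make_pair(length=8, lo=0, hi=99):
    """One training example: a random array and its sorted form."""
    xs = [random.randint(lo, hi) for _ in range(length)]
    return xs, sorted(xs)

# A handful of examples; the learner never sees sorting code,
# only these input -> output pairs.
training_set = [make_pair() for _ in range(5)]
for xs, ys in training_set:
    print(xs, "->", ys)
```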

More videos will be released soon showing the NLM learning the FFT, along with a side-by-side comparison of an FFT and a standard DFT to show that the NLM learned a fast implementation.
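
To make the FFT-versus-DFT comparison concrete: both compute the same transform, but the direct DFT costs O(n²) operations while the FFT costs O(n log n). A reference check against NumPy's FFT (this sketch says nothing about the NLM's own implementation):

```python
import numpy as np

def naive_dft(x):
    """Direct O(n^2) DFT, for comparison against the O(n log n) FFT."""
    n = len(x)
    k = np.arange(n)
    # DFT matrix: W[j, k] = exp(-2*pi*i*j*k / n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x

x = np.random.default_rng(1).normal(size=256)
# Same answer, very different operation counts as n grows.
assert np.allclose(naive_dft(x), np.fft.fft(x))
```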

NLM Results –

NLM Presentation –

SERRI Technologies, LLC doing business as Neurales (“Neurales”) reserves its intellectual property rights in and to this document, and its proprietary technology, including copyright, trademark rights, industrial design rights, and patent rights. Other marks used in this document are the property of their respective owners. Neurales does not grant any license, assignment, or other grant of interest in or to the copyright of this document or any referenced documents, Neurales logo, any other marks used in this document, or any other intellectual property rights used or referred to herein, except as Neurales may expressly provide in a written agreement. This document describes Neurales’ plans at the time of publication regarding a future product. The design, features, functionality, and release date of the future product are subject to change. Nothing in this document is a representation, guarantee, or warranty of any aspect of the future product upon being made available by Neurales or thereafter.
