Teach Neural Networks to identify sequences of values
First Things First
2, 5, 7, 10, 12 – and what number comes next? Mike Schilli tests whether intelligence tests devised by psychologists can be cracked with modern AI networks.
Neural networks do great things when it comes to detecting patterns in noisy input data and assigning unambiguous results to them. If a dozen people with different handwriting enter the letters A or B in a form, a trained network can identify with almost 100 percent certainty what they wrote. Or consider pattern recognition systems for identifying the license plates of passing vehicles: Aren't these technical miracles? They extract the digits from a camera feed so that the Department of Transportation knows exactly who is going where.
Once a neural network is done learning, it always assigns the same result to the same input data. However, when it comes to tasks that require determining the next value in a time-discrete value sequence, neural networks often fail to deliver perfect results, especially if the input signal is subject to variations of unknown periodicity.
In a neural network, the learning algorithm adjusts internal weights based on the training data. However, once these weights are determined, they won't change anymore at run time and thus cannot account for temporal changes in the input data, because the machine doesn't remember any previous state. Recurrent neural networks (RNNs) maintain internal connections back to the input, and thus a result can influence the next input vector, but this does not help a simple network identify temporal patterns that extend over several cycles.
At the Psychologist's
A somewhat entertaining example of sequence prediction is the intelligence test (Figure 1) performed by psychologists, in which the candidate is asked to determine the number that comes next in a numeric sequence. Any school kid can tell that 2, 4, 6 is followed by 8, but what comes after the sequence 2, 5, 7, 10, 12?
Figure 2 shows two learning steps and a test step for a Long Short-Term Memory (LSTM) network that I want to teach which number comes after 12. In the first learning step, in the first row of the matrix, it learns that the combination 2, 5, 7 is always followed by 10. The second row assigns a result of 12 to the subsequence 5, 7, 10, thus examining a window shifted by one step. The LSTM network uses this training data to adjust the parameters of its internal cells (Figure 3).
Unlike a plain neural network, the LSTM does not produce an output value for every input value; instead, it keeps track of the current state in a hidden memory cell (Figure 4). Only after receiving the third snippet of input and evaluating the carried-over memory state does it produce an output value (y(1)).
Reshaping Matrixes
To implement the LSTM network, Listing 1 [2] uses the Python keras library [3]. Because many of its functions expect data in the form of matrixes of varying shapes and sizes, it makes sense to run a quick tutorial of the reshape() function exported by the NumPy array library first. A one-dimensional NumPy array (i.e., a vector) is converted by reshape(), as shown in Figure 5, to matrixes of predefined dimensions.

The first parameter passed to reshape() is the number of elements in the first dimension, followed by the number in the second, and so on. Because the number of elements in one dimension is implicitly determined by the number of elements left over after the deeper dimensions are defined, it is often stated as -1; the library then fills that dimension with the remainder.

Called with just one parameter (reshape(-1)), the method converts a nested array structure back into a one-dimensional vector.
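The reshape() behavior described above can be tried out directly in a Python shell; this short snippet is a standalone illustration, separate from Listing 1:

```python
import numpy as np

# A flat vector with six elements
v = np.array([1, 2, 3, 4, 5, 6])

# Reshape into a matrix with three columns; the -1 tells NumPy
# to infer the first dimension from the remaining elements.
m = v.reshape(-1, 3)
print(m)       # [[1 2 3]
               #  [4 5 6]]

# reshape(-1) flattens the nested structure back into a vector
flat = m.reshape(-1)
print(flat)    # [1 2 3 4 5 6]
```

Because six elements split into rows of three, NumPy infers a first dimension of two and produces a 2x3 matrix.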
Counting Games
Called with an array like [3,4,5,6,7] (Figure 6), the script in Listing 1 relatively accurately produces the next sequence number (7.84 instead of 8). Listing 1 breaks down the series of numbers with the window function defined in line 12 into sliding windows of four ([3,4,5,6], [4,5,6,7]); it stores the first three elements of each window in the input vector X and the last element in the result vector y.
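The window function itself is not reproduced here; a minimal sketch of such a helper, assuming it simply slides a four-element window over the list one step at a time, might look like this:

```python
def window(seq, n=4):
    """Yield successive windows of length n, shifted by one step."""
    for i in range(len(seq) - n + 1):
        yield seq[i:i + n]

seq = [3, 4, 5, 6, 7]
X, y = [], []
for w in window(seq):
    X.append(w[:-1])  # first three elements: network input
    y.append(w[-1])   # last element: expected result

print(X)  # [[3, 4, 5], [4, 5, 6]]
print(y)  # [6, 7]
```

Each training row thus pairs a three-number subsequence with the number that follows it, exactly as the matrix rows in Figure 2 do.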
Listing 1: iq
To prevent the LSTM network's internal weight adjustments from going haywire, the StandardScaler from the sklearn library normalizes the original input values to small positive and negative floating-point numbers around zero, for both the input vector and the result vector, the latter containing the anticipated correct results required for supervised learning.

The fit_transform() method then applies the scaling procedure and standardizes the data. Later, before it comes to printing the results, inverse_transform() turns the tables and maps the data back to the original scale for an edifying inspection.
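In isolation, the scaling round trip works like this (a standalone snippet using the same sklearn classes; the sample values are illustrative, not taken from Listing 1):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

data = np.array([3, 4, 5, 6, 7], dtype=float).reshape(-1, 1)

scaler = StandardScaler()
# fit_transform() learns mean and standard deviation, then maps
# the values to small floats centered around zero
scaled = scaler.fit_transform(data)

# inverse_transform() maps the scaled values back to the
# original range for inspection
restored = scaler.inverse_transform(scaled)
```

After scaling, the values have a mean of zero and unit variance; the inverse transform reproduces the original numbers.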
Lines 38 to 45 stack the individual layers of the LSTM network on top of one another. First, the LSTM core layer, with five internal neurons, is added in lines 39 and 40. It is followed by the fully connected output layer of type Dense and the Activation function, which sets the response curve of the internally used neurons to linear, because this achieved the best results in testing.
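Exact line numbers aside, a layer stack of this shape can be assembled along the following lines. This is a sketch, not the code from Listing 1: the layer sizes (five LSTM neurons, one Dense output, linear activation) follow the article, while the input shape of three time steps with one feature each and the compile settings are assumptions.

```python
from keras.models import Sequential
from keras.layers import Input, LSTM, Dense, Activation

model = Sequential()
model.add(Input(shape=(3, 1)))   # three time steps, one feature each
model.add(LSTM(5))               # LSTM core layer, five internal neurons
model.add(Dense(1))              # fully connected output layer
model.add(Activation('linear'))  # linear response curve
model.compile(loss='mean_squared_error', optimizer='adam')
```

After compiling, the model can be trained with fit() on the scaled X and y vectors and queried with predict() for the next number in the sequence.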