Teach Neural Networks to identify sequences of values
First Things First
2, 5, 7, 10, 12 – and what number comes next? Mike Schilli tests whether intelligence tests devised by psychologists can be cracked with modern AI networks.
Neural networks do great things when it comes to detecting patterns in noisy input data and assigning unambiguous results to them. If a dozen people with different handwriting enter the letters A or B in a form, a trained network can identify with almost 100 percent certainty what they wrote. Or consider pattern recognition systems for identifying the license plates of passing vehicles: Aren't these technical miracles? They extract the digits from a camera feed so that the Department of Transportation knows exactly who is going where.
Once a neural network is done learning, it always assigns the same result to the same input data. However, when it comes to determining the next value in a time-discrete sequence of values, neural networks often fail to deliver perfect results, especially if the input signal is subject to variations of unknown periodicity.
In a neural network, the learning algorithm adjusts internal weights based on the training data. However, once these weights are determined, they won't change anymore at run time and thus cannot account for temporal changes in the input data, because the machine doesn't remember any previous state. Recurrent neural networks (RNNs) maintain internal connections back to the input, and thus a result can influence the next input vector, but this does not help a simple network identify temporal patterns that extend over several cycles.
At the Psychologist's
A somewhat entertaining example of sequence prediction is the kind of intelligence test (Figure 1) performed by psychologists, in which the candidate is asked to determine the number that comes next in a numeric sequence. Any school kid can tell that 2, 4, 6 is followed by 8, but what comes next after the sequence 2, 5, 7, 10, 12?
Figure 2 shows two learning steps and a test step for a Long Short-Term Memory (LSTM) network that I want to teach which number comes after 12. In the first learning step, in the first row of the matrix, it learns that the combination 2, 5, 7 is always followed by 10. The second row assigns a result of 12 to the subsequence 5, 7, 10, thus examining a window shifted by one step. The LSTM network uses this training data to adjust the parameters of its internal cells (Figure 3).
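To make the layout of this training data concrete, here is a minimal sketch, assuming NumPy and the (samples, timesteps, features) input shape that Keras LSTM layers expect; it is an illustration of the matrix from Figure 2, not the article's Listing 1:

import numpy as np

# Hypothetical illustration, not Listing 1: the two training rows
# from Figure 2, each mapping three consecutive values of the
# sequence to the value that follows them.
X = np.array([[2, 5, 7],     # first window  -> 10
              [5, 7, 10]])   # second window -> 12
y = np.array([10, 12])

# Keras LSTM layers expect input of shape (samples, timesteps, features);
# here: 2 samples, 3 timesteps, 1 feature per timestep.
X = X.reshape(2, 3, 1)
print(X.shape)  # (2, 3, 1)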
Unlike a plain neural network, the LSTM does not produce an output value for every input value; instead, it keeps track of the current state in a hidden memory cell (Figure 4). Only after receiving the third snippet of input and evaluating the carried-over memory state does it produce an output value (y(1)).
Reshaping Matrixes
To implement the LSTM network, Listing 1 [2] uses the Python keras library [3]. Because many of its functions expect data in the form of matrixes of varying shapes and sizes, it makes sense to first run through a quick tutorial of the reshape() function exported by the NumPy array library. As Figure 5 shows, reshape() converts a one-dimensional NumPy array (i.e., a vector) into matrixes of predefined dimensions.
The first parameter passed to reshape() is the number of elements in the first dimension, followed by the number in the second, and so on. Because the number of elements in one dimension is implicitly determined by how many elements remain after the other dimensions are defined, that dimension is often stated as -1; the library then fills the matrix with whatever is left over.
Called with just one parameter (reshape(-1)), the method converts a nested array structure back into a one-dimensional vector.
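A quick sketch, with arbitrarily chosen example values, shows the behavior:

import numpy as np

v = np.array([1, 2, 3, 4, 5, 6])  # a one-dimensional vector

print(v.reshape(2, 3))   # two rows, three columns:
                         # [[1 2 3]
                         #  [4 5 6]]

print(v.reshape(-1, 2))  # -1 lets NumPy infer the row count (here: 3):
                         # [[1 2]
                         #  [3 4]
                         #  [5 6]]

print(v.reshape(2, 3).reshape(-1))  # back to a flat vector: [1 2 3 4 5 6]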
Counting Games
Called with an array like [3,4,5,6,7] (Figure 6), the script in Listing 1 produces the next number in the sequence relatively accurately (7.84 instead of 8). Listing 1 breaks down the series of numbers with the window function defined in line 12 into sliding windows of four ([3,4,5,6], [4,5,6,7]); it stores the first three elements of each window in the input vector X and the last element in the result vector y.
Listing 1: iq
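The windowing mechanism can be sketched as follows; this is a hypothetical reimplementation of the window() helper described above, and the exact code in Listing 1 may differ:

import numpy as np

def window(seq, size=4):
    # Hypothetical reimplementation of the window() helper: slide a
    # window of the given size over the sequence, one step at a time.
    return [seq[i:i + size] for i in range(len(seq) - size + 1)]

wins = np.array(window([3, 4, 5, 6, 7]))  # [[3 4 5 6], [4 5 6 7]]
X = wins[:, :-1]  # first three elements of each window
y = wins[:, -1]   # last element holds the expected result
print(X)          # [[3 4 5]
                  #  [4 5 6]]
print(y)          # [6 7]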
To prevent the LSTM network's internal weight adjustments from going haywire, the StandardScaler from the sklearn library normalizes the original input values to small positive and negative floating-point numbers around zero, for both the input vector and the result vector, the latter containing the anticipated correct results required for supervised learning.
The fit_transform() method then applies the scaling procedure and standardizes the data. Later, before the results are printed, inverse_transform() turns the tables and maps the data back to the original scale for an edifying inspection.
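A minimal sketch of this scaling round trip, assuming sklearn's StandardScaler and arbitrary example values:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Arbitrary example values; StandardScaler expects a 2-D array.
y = np.array([3.0, 4.0, 5.0, 6.0, 7.0]).reshape(-1, 1)

scaler = StandardScaler()
y_scaled = scaler.fit_transform(y)  # mean 0, standard deviation 1
print(y_scaled.reshape(-1))
# [-1.41421356 -0.70710678  0.          0.70710678  1.41421356]

# ... after training and predicting on the scaled data, map back:
print(scaler.inverse_transform(y_scaled).reshape(-1))  # [3. 4. 5. 6. 7.]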
Lines 38 to 45 stack the individual layers of the LSTM network on top of one another. First, the LSTM core layer, with five internal neurons, is added in lines 39 and 40. It is followed by the fully connected output layer of type Dense and the Activation function, which sets the response curve of the internally used neurons to linear, because this achieved the best results in testing.
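Stated as code, the stack described above might look like the following sketch; the input shape, loss function, and optimizer are assumptions here, not taken from Listing 1:

from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation

model = Sequential()
model.add(LSTM(5, input_shape=(3, 1)))  # LSTM core with five neurons
model.add(Dense(1))                     # fully connected output layer
model.add(Activation('linear'))         # linear response curve
# Loss function and optimizer are assumptions for this sketch:
model.compile(loss='mean_squared_error', optimizer='adam')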