Teach Neural Networks to Identify Sequences of Values

First Things First

Article from Issue 206/2018
Author(s): Mike Schilli

2, 5, 7, 10, 12 – and what number comes next? Mike Schilli tests whether intelligence tests devised by psychologists can be cracked with modern AI networks.

Neural networks do great things when it comes to detecting patterns in noisy input data and assigning unambiguous results to them. If a dozen people with different handwriting enter the letters A or B in a form, a trained network can identify with almost 100 percent certainty what they wrote. Or consider pattern recognition systems for identifying the license plates of passing vehicles: Aren't these technical miracles? They extract the digits from a camera feed so that the Department of Transportation knows exactly who is going where.

Once a neural network is done learning, it always assigns the same result to the same input data. But when the task is to determine the next value in a time-discrete sequence, neural networks often fail to deliver perfect results, especially if the input signal is subject to variations of unknown periodicity.
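To see what such a task looks like in practice, consider how a value series can be carved into training samples: each fixed-width window of consecutive values becomes an input, and the value that follows it becomes the prediction target. The Python snippet below is a minimal sketch and not code from the article; the make_windows() helper is hypothetical, and the extended series assumes the teaser sequence keeps alternating between +3 and +2 steps.

    # Hypothetical helper: slice a value series into sliding windows,
    # each paired with the value that follows it (the prediction target).
    import numpy as np

    def make_windows(seq, width=3):
        """Return (X, y): rows of `width` consecutive values and the
        value immediately following each window."""
        X, y = [], []
        for i in range(len(seq) - width):
            X.append(seq[i:i + width])
            y.append(seq[i + width])
        return np.array(X, dtype=float), np.array(y, dtype=float)

    # 2, 5, 7, 10, 12, ... continued on the assumption that the series
    # alternates between +3 and +2 steps
    X, y = make_windows([2, 5, 7, 10, 12, 15, 17, 20], width=3)
    print(X[0], y[0])    # [2. 5. 7.] 10.0
    print(X[-1], y[-1])  # [12. 15. 17.] 20.0

A plain feed-forward network trained on such windows simply maps each input pattern to an output; as the next paragraph explains, that is exactly where it runs into trouble with temporal structure.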

In a neural network, the learning algorithm adjusts internal weights based on the training data. However, once these weights are determined, they won't change anymore at run time and thus cannot account for temporal changes in the input data, because the machine doesn't remember any previous state. Recurrent neural networks (RNNs) maintain internal connections back to the input, and thus a result can influence the next input vector, but this does not help a simple network identify temporal patterns that extend over several cycles.
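One common remedy is a network with an explicit memory, such as an LSTM (long short-term memory) layer, whose internal cell state carries context across time steps. The following Keras sketch is purely illustrative and not taken from the article; it assumes TensorFlow is installed, reuses the hypothetical make_windows() helper from the sketch above, and uses arbitrarily chosen layer sizes and epoch counts.

    # Minimal illustrative sketch: a small LSTM that learns to predict
    # the next value of the series from three-value windows.
    import numpy as np
    from tensorflow.keras import Input
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    X, y = make_windows([2, 5, 7, 10, 12, 15, 17, 20], width=3)
    X = X.reshape((X.shape[0], X.shape[1], 1))  # (samples, timesteps, features)

    model = Sequential([
        Input(shape=(3, 1)),   # three time steps, one feature per step
        LSTM(32),              # memory cell carries context across steps
        Dense(1),              # regression output: the predicted next value
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=500, verbose=0)

    # What follows 15, 17, 20? The alternating pattern suggests roughly 22.
    probe = np.array([15, 17, 20], dtype=float).reshape((1, 3, 1))
    print(model.predict(probe, verbose=0))

Because the LSTM processes the window step by step and keeps state between steps, it can pick up patterns that span several cycles, which is precisely what a memoryless network cannot do.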

[...]

