Set up Amazon Web Services – Part 2

Home Run into the Cloud

Article from Issue 197/2017

DIY Python scripts run in container environments on Amazon's Lambda service – this snapshot example deploys an AI program for motion analysis in video surveillance recordings.

After taking some initial steps in a previous article [1] – setting up an AWS account, an S3 bucket with a static web server, and the first Lambda function – I'll now show you how to set up an API server on Amazon that tracks down interesting scenes in videos from a surveillance camera.

The Lambda function, triggered either by a web request from the browser or by a command-line tool like curl, retrieves a video from the web, runs it through an artificial intelligence (AI) algorithm implemented with the OpenCV library, generates a motion profile, and returns the URL of a contact sheet – a JPEG showing all the interesting movements from the recording (Figures 1 and 2).

Figure 1: The AI program for motion analysis runs on Amazon servers behind a REST API.
Figure 2: The contact sheet produced on AWS displays the seconds in the surveillance video during which something actually moved.
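
The overall flow can be pictured as a single handler function that AWS calls for each request. The sketch below is illustrative rather than the article's actual code: it assumes an API Gateway proxy event, and the helper names are made up here (rough versions of these helpers appear further down).

    def handler(event, context):
        # URL of the surveillance video arrives as a query parameter
        video_url = event["queryStringParameters"]["video"]
        path = fetch_video(video_url)         # stream the video to /tmp
        scan_for_motion(path)                 # OpenCV pass, dumps JPEG frames
        sheet_url = upload_contact_sheet()    # montage the frames, push to S3
        return {"statusCode": 200, "body": sheet_url}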

Sandbox Games

Unlike Amazon's EC2 instances with their full-blooded (albeit virtual) Linux servers, the Lambda service [2] provides only a containerized environment. Inside a container, Node.js, Python, or Java programs run in a sandbox, which Amazon pushes around at will between physical servers, even putting the container to sleep during periods of inactivity – only to conjure it up again on the next request. Leaving data on the container's virtual disk and hoping to find it still there next time would thus result in an unstable application. Instead, Lambda functions remain "stateless" and hand their data to AWS offerings such as S3 storage or the DynamoDB database for safekeeping.
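
Persisting a bit of bookkeeping data from inside the handler takes only a boto3 call. The sketch below writes a job record to a hypothetical DynamoDB table named "jobs"; the table name and attributes are made up for illustration.

    import time
    import boto3

    def record_job(video_url, status):
        # Keep state outside the container, whose disk may vanish at any time
        table = boto3.resource("dynamodb").Table("jobs")
        table.put_item(Item={"video": video_url,
                             "status": status,
                             "ts": int(time.time())})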

Developers can upload anything an application needs beyond what a Python script can express to the (as rumor has it) CentOS-based containers as ZIP files (Figure 3).

Figure 3: Uploading code in a ZIP file to the Lambda server via an Amazon S3 bucket.
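
With boto3, the detour via S3 shown in Figure 3 takes only a few lines; the bucket name, ZIP filename, and function name below are placeholders, not values from the article.

    import boto3

    # Push the deployment package to S3, then point the Lambda function at it
    boto3.client("s3").upload_file("bundle.zip", "my-lambda-code", "bundle.zip")
    boto3.client("lambda").update_function_code(FunctionName="vimo",
                                                S3Bucket="my-lambda-code",
                                                S3Key="bundle.zip")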

A Lambda function that, like this example, uses AI capabilities from the OpenCV library therefore needs the required binaries and libraries compiled up front in a Unix environment similar to the Lambda container; the results are packaged, uploaded, and called from the Python script at run time. The script either uses existing Python bindings to the shared libraries or calls the precompiled binaries as external processes.
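
An external call of that kind might look like the helper below, which runs a binary shipped in the deployment package's bin/ directory and points the dynamic linker at shared libraries bundled under lib/. The directory layout is an assumption for illustration, not the article's actual package.

    import os
    import subprocess

    def run_bundled(binary, *args):
        # Lambda unpacks the ZIP into the function's working directory (/var/task)
        here = os.path.dirname(os.path.abspath(__file__))
        env = dict(os.environ)
        # Let the dynamic linker find the shared libraries shipped in ./lib
        env["LD_LIBRARY_PATH"] = (os.path.join(here, "lib") + ":" +
                                  env.get("LD_LIBRARY_PATH", ""))
        return subprocess.check_output([os.path.join(here, "bin", binary)] +
                                       list(args), env=env)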

Lean and Mean

To prevent the AI program [3] from using too much compute time after installation in the Amazon cloud – and thus from costing money once the "free tier" quota is exceeded – the improved code [4] (Listing 1, an update of the version from the previous article) no longer looks for movement in every frame (i.e., 50 times a second) but hops through the movie in half-second increments in line 99. After a frame with detected motion, line 96 even skips forward two seconds. To speed this up, the vid.grab() call in line 50 no longer painstakingly decodes each frame, as vid.read() did previously, but simply skips over it to reach the next one.

Listing 1

max-movement-lk.cpp

 

Whereas the first version [3] only printed the seconds into the video at which the algorithm detected motion and then relied on MPlayer to extract the corresponding frames as JPEG files, lines 92 to 94 now use the imwrite() image processing function included with OpenCV to write detected frames immediately to the virtual disk as 000x.jpg. A second pass – and the shenanigans of installing MPlayer in the Lambda container – are thus no longer required.
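
Reduced to a rough sketch, that logic might look as follows. Simple frame differencing stands in here for the article's actual motion detection algorithm, and the function name, parameters, and thresholds are illustrative, not taken from Listing 1.

    import cv2

    def scan_for_motion(path, hop=0.5, cooloff=2.0, thresh=25, min_pixels=500):
        vid = cv2.VideoCapture(path)
        fps = vid.get(cv2.CAP_PROP_FPS) or 25.0
        step = max(int(fps * hop), 1)
        prev, frame_no, hits = None, 0, []
        while vid.grab():                 # advance without decoding the frame
            frame_no += 1
            if frame_no % step:
                continue                  # only inspect a frame every half second
            ok, frame = vid.retrieve()    # decode just this one
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                diff = cv2.threshold(cv2.absdiff(gray, prev), thresh, 255,
                                     cv2.THRESH_BINARY)[1]
                if cv2.countNonZero(diff) > min_pixels:
                    second = int(frame_no / fps)
                    # Dump the hit to the container's writable /tmp area
                    cv2.imwrite("/tmp/%04d.jpg" % second, frame)
                    hits.append(second)
                    for _ in range(int(fps * cooloff)):   # skip ahead two seconds
                        vid.grab()
                        frame_no += 1
            prev = gray
        return hits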

Based on these JPEG images, another Python script, mk-montage.py, then produces a contact sheet, also in .jpg format, with the help of the ImageMagick library. The Lambda program puts this file into Amazon's S3 cloud storage, and then sends a link to the file to the calling client.
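
A sketch of this last leg – the montage call, the S3 upload, and the returned link – could look like the function below. The bucket name, tiling options, and use of a presigned URL are illustrative choices, not necessarily what the article's mk-montage.py does.

    import glob
    import subprocess
    import boto3

    def upload_contact_sheet(bucket="vimo-results", key="montage.jpg"):
        # Tile the detected frames into one contact sheet with ImageMagick
        subprocess.check_call(["montage"] + sorted(glob.glob("/tmp/0*.jpg")) +
                              ["-tile", "4x", "-geometry", "+2+2", "/tmp/" + key])
        s3 = boto3.client("s3")
        s3.upload_file("/tmp/" + key, bucket, key)
        # Hand the caller a time-limited link to the result
        return s3.generate_presigned_url("get_object",
                                         Params={"Bucket": bucket, "Key": key},
                                         ExpiresIn=3600)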

RAM Is Money

How does a Python programmer pick up a document from the web? A first approach would be the read() method provided by urlopen(), followed by a write() that sends all the bytes obtained to a local file. However, this would mean that a potentially large video file is read completely into memory before Python finally starts writing it to disk.

The ample supply of RAM needed for this costs money on Amazon. To avoid it, the urlretrieve() function from the urllib module, used in Listing 2, instead copies the data in smaller chunks – buffering them in a hopefully more or less intelligent way.
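
A minimal sketch of just this download step (not the full vimo.py from Listing 2) is shown below; the urllib.request naming is Python 3's, whereas the original script may use the Python 2 urllib module.

    import urllib.request

    def fetch_video(url, path="/tmp/video.mp4"):
        # urlretrieve streams the download to disk in chunks
        # instead of slurping the whole file into RAM first
        urllib.request.urlretrieve(url, path)
        return path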

Listing 2

vimo.py

 
