Sunday, February 25, 2018

Pi Zero W Security Cameras Part 1: Hub and Camera



I am completely against IP cameras that send video out to the cloud and then back to the end user.  I am also against cloud storage that I cannot set up myself or evaluate the security of.  If you want to get hot and heavy into file encryption, I will eventually add another post about how to encrypt files on public clouds.  I tend to err on the side of ACLs rather than encryption in the cloud, because the security footage should also be remotely accessible to the owner.  The path I intend to use for the camera footage is pi zero w --> motion detection server --> my cloud storage.  The added benefit of this setup is that using an RTSP feed from the camera to the motion detection server allows the feed to be viewed on other devices, such as an Amazon Echo Show or a Fire TV.  Some security will need to be added and skills will need to be configured to allow the Amazon connectivity.

This will be broken into a few posts, since we need to establish the camera hub and cameras prior to building out the skills.  This project will require a Raspberry Pi 3 to be used as the hub, and we will connect it over ethernet rather than wifi.  This way multiple feeds can be collected, schedules can be set, and all motion can be sent from a central location to cloud storage.  Another feature that can be added is a microphone.  For the new and expecting parents out there, you should look at the LIFX+ light bulb with IR.  With the NoIR cameras for the Raspberry Pi and a microphone, you can build the equivalent of a high-dollar baby monitor system for about $100, and each additional viewing angle in the room will only cost the price of a pi zero kit, camera, and microphone.

Part 1: Hub and Camera Setup

The hub software is MotionEye OS.  It is built for single board computers and works exceptionally well.  The tuning and tweaking required is minimal, but you will want to be able to see the DHCP logs of your router to connect after the install.  I am not using a camera on the hub, so all processing power is being used to render the network camera feeds.  For each network camera, I am using Raspbian Stretch Lite.  A little warning about the cameras: there are no cases that fit the cameras with IR lights included and attached.  For any specialty camera that you decide to buy, you should build or buy a case, or deal with having a system that looks like a small pile of e-waste.  Most pi zero cases have room for the heat sink or the regular pi camera, but not both.

Small Pile of E-waste

Hub BOM

1. Raspberry Pi 3 - $35
2. Power supply - $7.50
3. 32GB high-speed micro SD card - $20
4. Pi Case - $2.00

You can get this as a kit for $70 on Amazon with a couple of bonus parts: an HDMI cable, a USB micro SD reader/writer, and heat sinks.  Since this is a headless server, the HDMI cable is completely unnecessary.

Camera BOM

1. Raspberry Pi Zero W - $10
2. Power supply - $7.50
3. Case - $6
4. Micro SD card - $10
5. Camera - $15

The SD card speed is not a high priority for the cameras, since they will be streaming live data rather than recording it locally.  The first four items can usually be purchased as a kit for a few dollars less.

Build the Hub

The software for the hub has a very simple install and configuration process.  Use your favorite imaging software to get the OS image onto the microSD card, insert it into the Pi 3, and boot it up.  We are using hard-wired ethernet, so it doesn't matter where the hub goes in your house, as long as it is plugged into power and network.
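If you prefer the command line to a graphical imaging tool, a minimal sketch of writing the image follows.  The image filename and /dev/sdX are placeholders, not the actual release name, so confirm the device with lsblk before writing anything.

    # Identify the SD card first -- /dev/sdX below is a placeholder, check lsblk output
    lsblk
    # Write the downloaded image to the card (filename is an example)
    sudo dd if=motioneyeos-raspberrypi3.img of=/dev/sdX bs=4M conv=fsync status=progress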

Aggregate 1
MotionEye OS

Once the system is up, check the DHCP tables of your router to see what address was leased to the device.  You can also dig around on the client list for a motioneye device.  Connect through the browser and log in. There should not be much to configure until we build the camera.

Build the Camera

Depending on the case purchased, most of the pieces should snap together fairly easily.  I ended up having to break mine down because the top of the case was curved.  This led me to believe that the camera was not properly secured, and I pushed on it until the ribbon connector directly below the camera disconnected.  That led to some fun: checking the pi-to-camera ribbon about 20 times, with a shutdown every 5 minutes to retest.  Keep that little connector in mind if you need to troubleshoot.

Check This Connector

This is a typical Raspbian install: add the wpa_supplicant file, touch ssh on the boot partition, and go through raspi-config to change the hostname and enable the camera module.  To start the software side off, update the OS and reboot if necessary.  Next up is where things got a little dated in the main blog post I used for my build.  I work with RHEL professionally, so the whole "install from source using sudo" thing is a little strange to me.  You will want to install Live555 first; once that is installed, the RTSP server can be installed and the service configured.  The funny thing is that the RTSP install guide references and downloads files from the same location as the Live555 guide.
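For reference, here is roughly what that headless prep looks like on Stretch.  The SSID, passphrase, and paths are placeholders, so substitute your own values.

    # On the boot partition of the freshly written card (mount path is an example)
    touch /boot/ssh

    # /boot/wpa_supplicant.conf -- copied into place on first boot
    country=US
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1
    network={
        ssid="YourSSID"
        psk="YourWifiPassword"
    }

    # After the first boot
    sudo raspi-config                        # change hostname, enable the camera
    sudo apt-get update && sudo apt-get upgrade -y
    sudo reboot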

Aggregate 2
Live555 install instructions are near the bottom

Aggregate 3
V4L2RTSPServer Install and configuration

Once installed, you may want to change some things around.  For example, the service file should be in /lib/systemd/system/ with a symlink in /etc/systemd/system.  I tend to install extra software in /opt (because I'm not a savage), so the service will need to point to the correct location of the v4l2rtspserver binary.  If you installed in /opt, that will be /opt/v4l2rtspserver/v4l2rtspserver.  You can also modify the service to work a little better and faster, but the limitations of a pi zero will dictate a few of the changes.  I've noticed that changing the framerate (-F at line 11) to 20 is decent and works well in VLC for testing.  I also tend to be a little realistic about what I'm trying to do with the camera, and change the resolution to a lower format.  You can probably stream 30 fps 1080p video, but I've found a 20 fps 1280x768 stream is considerably faster to load.  Once you have finished with the service config file, start it up with "sudo systemctl start v4l2rtspserver" and verify that it works with "sudo systemctl status v4l2rtspserver".  Enable the service on boot with "sudo systemctl enable v4l2rtspserver.service".
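As a rough sketch of what I ended up with (the actual unit file comes from Aggregate 3, so treat the flags, paths, and line layout here as assumptions to check against your own install):

    # /lib/systemd/system/v4l2rtspserver.service -- the ExecStart line after my changes
    ExecStart=/opt/v4l2rtspserver/v4l2rtspserver -W 1280 -H 768 -F 20 -P 8554 /dev/video0

    # Reload systemd after editing the unit, then start, verify, and enable it
    sudo systemctl daemon-reload
    sudo systemctl start v4l2rtspserver
    sudo systemctl status v4l2rtspserver
    sudo systemctl enable v4l2rtspserver.service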

Camera in case with SD card for scale

Add the Camera to the Hub

Testing the camera prior to adding it to the hub can be helpful.  If you have a computer with VLC installed, open a network stream to rtsp://camera-ip-address:8554/unicast and verify that you get video.  A quick warning: stop the feed and close the VLC application prior to setting up the camera on the hub.  The port can get locked up on certain builds of VLC.
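On a machine with VLC installed, the same test can be run from a terminal; the address below is a placeholder for your camera's IP.

    vlc rtsp://192.168.1.50:8554/unicast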

Open the Hub web console and use the drop-down menu to add a camera.  Select Network Camera in the menu and add the same rtsp address from the VLC test.  There should not be any username or password requirement at this stage.  Finish adding the camera and verify that video is displaying.  From within the General Settings menu, turn on advanced settings.  This will allow you to change the Timezone and Hostname for the hub.  Expect a refresh or reboot every time you save changes.

You can disable whichever services you do not need.  I don't really have any Windows systems, so I can shut down Samba without too much concern.  The OS is fairly customized, so SSH isn't incredibly helpful either.  In the Video Device settings, change the Resolution and Frame Rate to match what you entered into the rtsp service.  The stream should speed up once you apply these settings.  Further down the menu you can set up file retention, the camera schedule, and file storage (including cloud and custom locations).  The options inside File Storage are the most interesting, since you can have the hub message you when it captures motion and save the images/video to a cloud location.  These are the high-dollar features on most consumer-grade security cameras.

Multiple Cameras

For a quick setup and configuration of multiple cameras, you only have to fully configure one.  Once you have built your working model, clone the image.  I keep a backup of my completed image locally and on my cloud storage.  Once you write it to a new microSD card, just run raspi-config on the new camera to give it a different hostname.  Don't forget to convert the wpa configuration to a hashed passphrase when you are done; a sketch of both steps follows the links below.

Aggregate 4
Backup and Restore Pi Image

Aggregate 5
WPA Passphrase Instructions
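For reference, the backup/restore and passphrase steps look roughly like this.  The /dev/sdX device name, image filename, and SSID/password are placeholders, so adjust them to your own card and network.

    # Back up the finished camera image (confirm the device name with lsblk first)
    sudo dd if=/dev/sdX of=pizero-camera.img bs=4M status=progress

    # Write the backup to a fresh card for the next camera
    sudo dd if=pizero-camera.img of=/dev/sdX bs=4M conv=fsync status=progress

    # On the new camera, generate a hashed psk and drop it into
    # /etc/wpa_supplicant/wpa_supplicant.conf in place of the quoted plain-text one
    wpa_passphrase "YourSSID" "YourWifiPassword"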

Once all of the cameras are configured, fine tuned, and operational, the next step is to add TLS so you can reach them remotely if you choose to.  Adding TLS to the stream and configuring the skills to view the feed from an Amazon Fire TV and Echo Show will be covered in the next post, Pi Zero W Security Cameras Part 2: TLS and Amazon Skills.

Saturday, February 17, 2018

Why Amazon instead of Google?

I couldn't resist posting something about the breaking news.  VMware, one of my favorite companies, just lost another long-term employee to Google.  The funny thing is that the tech blogs and articles make some pretty good points but don't seem to follow through with their thoughts.  They point out that Google isn't as far reaching as Amazon with some of their cloud services.  Yes, that is correct.  But it becomes more interesting when you actually use the products involved in the articles.  For example, VMware has free Hands-On-Labs (HOL) that allow anyone to gain experience with their products.  The products also range in price from free to pretty damned expensive.  The products scale well: from a laptop to a home lab, to a small office, to an enterprise, to cloud hybrid, to cloud.  And you can probably build something on the laptop product that can be deployed into the cloud.  Some of their enterprise orchestration tools have even been used as a method of home automation.

This type of scalability is an area where Google is not the best.  There was a recent news report about Apple using Google for its own iCloud platform.  That's because Google is really good at large things that appeal to the masses.  Where they are lacking is the home/hobby developer.  They don't really scale down to the way amateur developers do things.  My simple explanation is that when an amateur (like myself) is developing, they tend to base it on their own needs and environment.  When a professional is developing, they tend to make sure it is as universal as possible.  VMware cracked that code with their range of software and scalability, while providing a HOL environment to learn how to use their products to fit your own needs.

The reason Amazon is doing so well in the home automation field is that they offer a customizable solution on top of their enterprise tools.  You can build a one-size-fits-one, or a one-size-fits-all.  That's why I use their cloud services.  I don't think I've seen the equivalent of AWS Lambda on Google, although it should be there.  Amazon beat everyone to the punch with the Echo, and we are now watching companies play catch-up.  Apple only releases polished products, so they have effectively taken themselves out of the running as a serious competitor.  Microsoft has their niche, which pretty much revolves around the Xbox for most users.  That leaves Google as the only serious competition to Amazon.

Here's the gamble: can Google create a user-friendly interface for amateurs to create their own skills and tools before they lose too much market share to Amazon?  With the new employee from VMware, maybe.  And this obviously is not a winner-take-all scenario.  Apple has had a popularity cycle of ups and downs that spans decades.  But the longer it takes for Google to make their cloud development platform accessible to guys like me, the harder it will be for them to get back on top.  They already offer some of the best services, with YouTube, Gmail, Android, Play, and a ton of others.  It would be nice if they could take care of the hobby developer.

Raspberry Pi Remote Control

One of the ideal outcomes of new technology is advancing automation: setting a schedule for a device to follow and establishing triggers to execute a skill.  The problem I was having was twofold.  First, the schedule on my television was not executing properly.  Second, I have children in the house, so I can never find the television remotes.  The game of "where is the remote" started to get too difficult for me, since the hiding spots included the refrigerator as well as the pantry.  So, my living room television got a Harmony hub.  Then the problem spread to my bedroom.

I took a look at my actual usage of the Harmony hub and realized that even if I could justify the price, multiple hubs may not play nice on the same network.  It was also around this time that the Raspberry Pi Zero W became widely available.  I had seen some absolutely ugly implementations of a smart IR blaster online, and decided that I could probably perform the same tasks with a better-looking build.  I decided against the full-sized Pi due to price, but the instructions should be pretty much the same if you decide to use one.

Since this is Aggregated Knowledge, I'll list the links that helped me create a rough draft, my version, and my recommendations for a better version.  I will also include my own troubleshooting, which should help explain how and where to customize the code to your own specifications, with an explanation of why certain things will or will not work.  First up, the BOM and required tools.

1. Pi Zero W -- $10
2. Power supply for Pi -- $5
3. 40 pin GPIO add-on -- $3
4. Micro SD Card -- $6
5. Pi Case -- $3
6. IR Blaster module for Arduino -- $3
7. Wire for GPIO to Blaster module -- $1

Prices are a rough estimate.  Items 1-5 can be purchased as a kit on Amazon for ~$25.

Next up, tools.  Everything goes together fairly easily without tools, but to get that 40 pin GPIO header on, you will need a soldering iron and solder (unless you buy the solderless pins, which are a bit pricey).  My recommendation for soldering the pins is to break away the 3 pins you will need and solder those on individually.  The reasoning is that you may want to bend them at a 90 degree angle to hide the wires inside of the case if you are not cutting custom wires, or to prevent clutter inside the case.  It makes the project look nicer.  If you plan on soldering on the whole 40 pin header, Aggregate 1 below is an excellent example of how to perform the soldering quickly.

Aggregate 1: 


I love this example because it doesn't use anything more than a flat surface and soldering tools.  No custom stands or even a set of helping hands, because you don't need them.  If you are completely new to soldering, YouTube user EEVblog is the equivalent of a motivational speaker combined with an engineering instructor.

Next, installing and configuring the OS.  At the time of this writing, Stretch is the latest release and is what I have used.  This will lead to changes in scripts and code, since the available packages and their locations have changed a little as well.  For example, we will be using node.js, which was previously at a different location and defined as node rather than nodejs.  Keep this in mind with the additional aggregates that I link, and with your own project.  The typical method of writing the OS image to microSD should be followed, then add the wpa_supplicant file.  Upon reboot, for security purposes, we will set up a wpa passphrase.

Aggregate 2:


Now it's time to make you open a half dozen tabs in your browser.  The next aggregate provides the primary building blocks for the web service portion of the project.  There is some very good information in it, but it is also a little bit dated and written for controlling lights and fans.  An important note here: if you need to capture your remote control codes, this aggregate has the instructions.  I have a Samsung TV, which has the same IR codes for power/channel/sound on most of the newer remotes, so I was able to download my lircd.conf file from the online database without issue.  I would recommend reading through Aggregate 3 without executing anything until you have read through the changes detailed in this post.

Aggregate 3:

Aggregate 4:

From the example in Aggregate 3 (Agg3), step 3 becomes less convoluted if you have the lircd.conf file for your specific remote.  If you plan on changing the name of your device across the config files, you will need to search for instances of "living_light" and replace them with the new name.  Locally, this will be required in /etc/lirc/lircd.conf and /opt/alexa_home_control/raspberrypi/iot_shadow.js.  I did not change the living_light portion of the scripts, code, or config files.  In order for the setup script to execute properly, change the /opt/alexa_home_control/raspberrypi/service/alexa_home.service file so that it runs nodejs correctly: the /usr/bin/node entry should match the actual location of nodejs.
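A quick sketch of those two checks, assuming the Agg3 paths above (the grep targets and the ExecStart line are illustrations, not the exact contents of Agg3's files):

    # Find every leftover reference to the default device name
    grep -rn "living_light" /etc/lirc/lircd.conf /opt/alexa_home_control/raspberrypi/

    # Confirm where Stretch actually put the node binary...
    command -v nodejs
    # ...and make the service's ExecStart point at it, e.g. something like:
    # ExecStart=/usr/bin/nodejs /opt/alexa_home_control/raspberrypi/iot_shadow.js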

In Step 4, the IoT setup has changed a little, but not too much.  It is relatively simple to set up a device and add the shadow file from Agg3.  You may want to set up your AWS IAM environment first, so you have two-factor authentication in place before building anything on AWS.  Just make sure everything is in the same AWS region when you build it out.  Make sure you have the certificates where they need to be according to Agg3, but modify iot_shadow.js to include the new variable for the shadow name and add the host endpoint.  You can remove the lines for clientId and region.  The additions are as follows:

var shadowName = "Philbert"
host: "enter in your https endpoint here"

My IoT device name is Philbert, which will offer a lesson when we get to the Voice Service.  The host endpoint is on the Interact page of your IoT device, under HTTPS.  Once these changes are made, execute the setup script on the Pi.

A new role in IAM should help smooth out the Lambda configuration.  I gave the role full permissions to IoT and write permission to CloudWatch Logs.  Permissions can be scaled back once you have a working device; I cannot guarantee that the permissions listed in Agg3 are correct.  The Lambda configuration is where things can go sideways fairly quickly, and CloudWatch logs will help you troubleshoot.  Using the Lambda linked in Agg3 as a starting point, I created my own file that, after a considerable amount of learning and troubleshooting, finally worked.  This code should execute with minimal effort on a Python 2.7 Lambda.

Aggregate 5:
My Lambda

Line 7 will need a correct Voice Service ID, which will be added in the next step.
Line 9 requires that same HTTPS endpoint from the IoT management page.
Lines 11 and 30 are just the region portion (us-southcentral-2) of the endpoint that matches Line 9.

Looking through the Lambda, you can see how quickly things can be added.  You have intents, a way to map them to a command, and the code that validates the intent.  I got stuck for a few hours on the bottom of the script, around lines 129 and 136, reviewing every line of every config file and script.  The logs helped me figure it out, but it was frustrating since I have absolutely no training or education in Python or AWS.  If you have changed anything, the intents and utterances should be changed in Voice Services as well.  Collect your Lambda ARN and open the Alexa Developer Console in a new window.

Most of the ASK/Alexa/Voice Service configuration is pasting in existing information and modifying entries.  As previously noted, the name of my project is Philbert, but speech-to-text reads it as Filbert.  So, the Invocation Name of your skill should match what you see in a speech-to-text app.  There's a bug in my Intent Schema: when an Echo hears "turn up", "turn down", "turn volume up", or "turn volume down", it will change the volume on the Alexa device rather than issue the command to the IoT device.  Changing the Intent and the Lambda should correct this; I just don't want to have to say "Alexa, push it to the right" to get my television to turn the volume up.

Aggregate 6:
Intent Schema

Aggregate 7:
Utterances 


The Configuration tab should allow you to enter the Lambda ARN, and then you can test and save the skill.  Once the ARN is entered, verify that the skill ID from the Skill Information tab has been entered into the Lambda.  Perform a quick test on the Skill Test tab ("alexa, ask filbert to turn on") and see if it goes through.  If not, look at the Lambda Monitoring tab.  If there are no errors on the Monitoring tab, test the connectivity to the AWS IoT device by watching its activity log while executing the test again.  If the test executes properly, SSH into the pi and verify that the alexa_home service is running.  If it is, attempt to restart it and see if it registers on the activity log for the AWS IoT device.
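On the Pi side, that check is just systemd again; this assumes the service was installed under the alexa_home name used by Agg3's service file.

    sudo systemctl status alexa_home
    sudo systemctl restart alexa_home
    # Follow the service log while re-running the skill test
    sudo journalctl -u alexa_home -f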

Once the device works, it is time to determine where you would like to mount it.  Some double-sided tape can mount the pi behind a television, while the IR blaster can be mounted under the IR receiver on the television bezel, with the bulb extending out just enough to register.  The other option is 3D printing a case, or using colorful tape and some screws to hide the wire and stabilize the IR blaster.  Once the blaster is stabilized and the device looks pretty, you can test the maximum working distance from the TV.

Thanks for reading through this.  Coming soon: "Why Amazon Web Services for IoT" and "Pi Zero W Security Cameras to the Echo Show".




  
