Friday, March 30, 2018

I hate iptables and so can you

I was told that all power leading to my house was underground and therefore I shouldn't have to worry too much about power outages.  I had just moved from a place that was on the same grid as a Verizon distribution center, so I hadn't had significant outages in a very long time.  Maybe twice a year.  Well, the power goes out at least once a month in my new house.  All of my battery backups need new batteries, and it's one of those things that doesn't jump into the budget without a reminder.  So, here I am with shaky power and a decent home lab, trying to follow some semblance of best practices by not shutting off all of the firewalls.

The fun part about iptables is that the rules go away when you lose power, unless you took preventative steps when setting them up.  As a side note, the other fun part is that you have to back them up/save them every time you change them.  You can get around this by setting up a cron job to perform the backup once a week; it's not like the rules will fill up your hard drive.  So, let's work on backing up and restoring iptables, so you don't have to worry about it anymore.

To start off, most of the information needed for RHEL- and Debian-based distros is available at one website.

Aggregate 1
Save and restore iptables

As I have mentioned in previous blogs, I use Photon for my container hosts.  That way I don't have to worry quite as much about my physical box, and it runs on nearly any hypervisor.  Not kidding, I have it running under the bhyve hypervisor on my FreeNAS.  Since Photon is a docker host, and docker hosts use a whole lot of bridging, I also got to learn about ebtables.  Ethernet Bridge Tables are like iptables, but they will make you want to live in a hut without electricity rather than strictly implement them.  They are a level of security that should be learned and implemented in high security environments, but I will not dig into them here.  Just don't get confused by the ebtables-config file in /etc/sysconfig on the Photon system; that is not for iptables.

If you are using Photon, the location to save your iptables rules is /etc/systemd/scripts/ip4save, or ip6save if, for some reason, you are using IPv6.  Since the iptables service should be enabled by default, the rules you save should load automagically when the system boots.  If you are in a homelab and want to make sure that the tables are saved at some regular interval (in case the power goes out again), you may want to set up a cron job.  Before anyone freaks out about this, I acknowledge that it is probably a bad security practice in a corporate/high security environment.  In those types of environments, the cron job should not back up the iptables rules to the location they execute from on boot.

Prior to setting up a cron job on Photon, you will need to install cronie: "tdnf install cronie".
Now you can set up a cron job.

Aggregate 2
Cron tab quick settings explanation

Aggregate 3
Set a cron task

The shell script that I created was pretty simple.  The entire file looked like this:

#!/bin/bash
/usr/sbin/iptables-save > /etc/systemd/scripts/ip4save
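
If you want the backup on a schedule, a single crontab entry is enough.  As a minimal sketch, assuming the script above is saved as /usr/local/bin/save-iptables.sh and marked executable (the path is just an example), run "crontab -e" as root and add:

# save the current rules every Sunday at 03:00
0 3 * * 0 /usr/local/bin/save-iptables.sh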

And now I don't have to worry quite as much about iptables vanishing after another power outage!

Sunday, March 25, 2018

Registries and backups for docker containers

There was a recent tech podcast in which one of the guests explained that he was turning his back on certifications.  When asked how he will provide proof of his knowledge, he stated that he would present his portfolio.  That included Github, a blog, and presumably Dockerhub.  This approach is not realistic for everyone.  Sometimes maintaining a cert is a condition of employment.  But, the portfolio will probably start to become a standard practice when going for anything beyond an entry level position in the development world.

The shining star of Information Assurance is usually security, but backups are also a critical part of IA.  In practice, the backups allow for a stronger security posture.  I've had to roll back to a previous image of a virtual machine because some security setting was implemented incorrectly and fried a machine.  Modern devops should be starting with a secure by default image and then loosening security to allow an application to work.  This might mean that you have to roll back to an image that is more secure because loosening a control did not resolve the broken application.

We'll start with backups and move to repositories to create a decent development environment.  I am aware that there are tools that allow you to jump into a shell on a container, modify it, commit those changes, and push it to a repo from a single screen.  That's great if you intend on really going crazy with development.  I'm not going to implement those tools, because they will not be necessary for basic modifications or testing with a low volume of containers. 

From the build your own container blog, we should know how to get into a shell on a container with docker exec.  Once there, you make your modifications and add programs to build a container application.  Now, you've put in the time, but it isn't working exactly the way you want.  Time to hit pause and think about what it might be missing or what needs to be configured for it to work.  You need to commit the changes and you should probably save it. 

Aggregate 1
Commit!     

Commit will allow you to give a nice tag to the container and allow you to spin up that version to keep working.  But, you may want to take it to a new machine (like a laptop) and work on it elsewhere.  That's when you need to save it. 

Aggregate 2
Save! 

And once it is moved to the new host, you will need to load it to continue working.

Aggregate 3
Load!
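
To see how those three steps chain together, here is a minimal sketch of the loop.  The container name (webdev) and the image tags are made up for illustration; substitute your own.

# commit the running container's current state to a new image with a meaningful tag
docker commit webdev local/webapp:dev-01

# save that image to a tarball so it can travel to the other machine
docker save -o webapp-dev-01.tar local/webapp:dev-01

# on the new host, load the tarball and confirm the image is there
docker load -i webapp-dev-01.tar
docker images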

Yes, all of the aggregates will come from Docker.  But I promise, this is just to explain how it should flow.  What has been described so far is a single person development environment.  Nothing has been pushed to a repo.  For a home user, you push to a private repo for collaboration, and a public repo for actual deployment.  This is also where things branch out.  Either you can figure out how to get the application configured, or you hit a bump.  If you hit a bump and need someone to look in the container, you can either share the tarball for the saved application or you can push it to a private repo and allow access to people that want to collaborate. 

The access controls for making a repository private are dead simple.  There should be a giant "make private" button in one of the menus.  It's not very intuitive to add users for collaboration.  Just remember, development containers stay private.  Production containers need to be cleaned, tested, and evaluated before pushing to a public repo.  The access controls should be locked down completely for a production repo; only one account should be allowed to push to it.

I prefer using Dockerhub, but the instructions below also cover private repositories.

Aggregate 4
Login before you push 

I would also recommend using one of the keystores mentioned in the login instructions.  I don't think anyone will be sifting through my bash history, but it's good practice not to type a password on the CLI.  You then tag and push.  My experience with Dockerhub has been that it is the default registry for these commands, so you do not need to specify it explicitly.  My login, tag, and push look like this:

docker login -u dockerhubusername -p dockerhubpassword
docker tag local/container:version dockerhubusername/container:version
docker push dockerhubusername/container:version

The instructions listed in the following aggregate are a bit more universal, so they include adding the repo to the commands.

Aggregate 5
Tag and Push

You can then verify by looking at your Dockerhub account and running the docker images command. 

Never forget to clean up your container before pushing it to public.

Saturday, March 24, 2018

Registries and the pain of SSL

I ended up going all out for my local container registry.  I don't recommend it.  This definitely falls into the "I'll see what it takes in case I need to do it at work" category.  In this instance, "all out" meant standing up a local Certificate Authority and then adding the CA to each of the nodes in the docker swarm.  All so that I could pull images from the internet and then push them to my local registry.  That is another terrible idea: with a public registry you can set an image to re-pull every time you start it, but with a local mirror you do not get updates unless you pull them yourself, and you had better know how to tag your local pushes.

I also did not want to use LetsEncrypt when I did this.  The idea being that I might have to set this up in a sandbox with absolutely no network communication outside of a rack and a handful of dev consoles.  A few of the best practices were thrown to the wind, considering what kind of environment I was designing.  In the end, both RedHat and VMware came to the rescue with ready-to-deploy container infrastructure that comes with self-signed certificates.  So, this will be rather short, with a heavy reliance on documentation from other sources.

Aggregate 1
Build a CA

This guy covers multiple methods, and tends to analyze situations fairly well.  As soon as he pointed out that he didn't need a full PKI implementation, I dug around and found what I needed.  Some of the info on these things reads like building a Kerberos realm from scratch.  You'll also need to review how to create a certificate in that documentation after you read the requirements and the instructions on how to add it to the registry container.

Aggregate 2
Certificate on a Registry

This documentation lists what is required and how the files should be named.  My lesson learned on that: I somehow ended up with a folder full of cert files after an attempt to put SSL on some old VMware product (vCenter 5?).  Documentation was fixed in a later version, but the naming convention of the certificates wasn't really optional for that build.  Be warned, sometimes you need to follow instructions to the letter.
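
If you skip the full CA and just want a self-signed certificate on the registry (the simpler route the registry documentation describes, not the CA build I did), a sketch looks something like this.  The hostname registry.lab and the /certs paths are made up; the certificate's common name has to match whatever name your nodes will use to reach the registry.

mkdir -p /certs
# generate a self-signed certificate and key for the registry hostname
openssl req -newkey rsa:4096 -nodes -sha256 -subj "/CN=registry.lab" \
  -keyout /certs/registry.lab.key -x509 -days 365 -out /certs/registry.lab.crt

# run the registry with TLS enabled, using the documented environment variables
docker run -d --name registry -p 443:443 \
  -v /certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.lab.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/registry.lab.key \
  registry:2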

The final piece of the lab was adding the CA to the nodes as a trusted source.  This ended up being simple enough; it just sucked as far as moving things around in a small lab.  I don't have anything like Puppet or Ansible set up, so I had to SSH into every system like some kind of savage.

Aggregate 3
Adding trusted CA
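
For the docker-specific half of that trust, the engine will accept a registry's CA if the certificate sits in a directory named after the registry.  A minimal sketch, assuming the registry answers at registry.lab:443 and the CA file is called ca.crt (both names are examples):

# on each node, drop the CA certificate where docker looks for per-registry trust
mkdir -p /etc/docker/certs.d/registry.lab:443
cp ca.crt /etc/docker/certs.d/registry.lab:443/ca.crt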

I don't recommend this exercise, and I think it is a waste of time for anyone with high speed internet access.  It is considerably easier and potentially safer to just use save and export commands if you don't want to push to an existing registry.  For others that are building out a sandboxed lab, consider pre-packaged options. 

Cleaning up the mess in a container

Security is always a big deal with computers.  If you know how to do more than plug in a printer, you've probably had someone ask you to take a look at their computer.  It was probably riddled with malware that more than likely came from online games or some sketchy adult website.  A security conference that I went to many years ago kept bringing up how a browser got hijacked while logged in as a low privilege user, and it led to a full scale compromise of the system.

One of the resources used by security researchers as a guideline to secure their systems is the Security Technical Implementation Guide (STIG).  There is plenty of chatter online about them being laid out in a ridiculous manner, or not going far enough.  That is why they are a resource, not the resource.  I agree with many of the arguments.  I'm not sure if it is still true, but one version had something like 12 checks for what was in the SSH configuration file.  The checks also spanned severity categories, which means that instead of fixing all 12 checks in one shot, sorting by category meant modifying the same file 3 or 4 times to knock out all of the checks.

Aggregate 1
Where to get STIGs

If you work in security, you'll also know not to trust that link.  Look at what it leads you to and either validate it, or go through a trusted resource. 

One of the checks that always bothered me because I am lazy and crave some level of convenience is the removal of compilers.  If I get rid of gcc on my system, I can no longer install some patches or re/compile applications and the kernel.  While that is true, all you have to do is reinstall gcc when needed, and remove when you are finished.  It prevents everyone else that shouldn't be using it from readily having access to it.  The same is true of what we put in containers.  When I created my own Nginx container I had to add make, gcc, and other tools to build from source.  I recently noticed that the thin version of Photon OS does not include tar.  These types of tools are standard on most new systems, because the system is new and will need them for building the applications and applying the first round of patches.  This is not a new security practice either, just an often overlooked one.
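
On a Debian or Ubuntu based container, that cleanup before committing the production image can be as simple as purging the toolchain.  This is a minimal sketch, assuming the build tools listed earlier were only needed for the build itself:

# remove the compiler and build tools, then clear the package cache
apt-get purge -y gcc make git wget
apt-get autoremove -y
apt-get clean
rm -rf /var/lib/apt/lists/*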

Aggregate 2
Check out the date on this question

There should always be a fork of a container when it goes into production.  Keep your development container, with tools installed, tagged as a dev container locally.  Also, keep a list of everything you have added that can be pulled off of the container before it is tagged and pushed as a production-ready container.  Never install SSH on a container unless you have some use case that absolutely requires it.  SSH is generally not a necessity even in dev environments, and it causes a cascade of security issues.  Following those tips will earn you a serious amount of respect from anyone that pulls your container from a registry.  Nobody wants to search for tools that they have to remove from a container.

There are plenty of container security resources coming out of the woodwork right now for that sweet security money.  They are not really aimed at home users.  The starting point for a home user/hobbyist should be Linux hardening knowledge, which has its own resources (like the STIG).  If you want to geek out about security, find the tools and procedures that fit your environment.  Some of the hardening guides go through pretty intense security practices that are not worthwhile for a home user operating inside their own firewalled network; I would only consider them at home if I were using them to learn how to implement higher levels of security in a production environment at work.  So, in essence, be realistic about what kind of threats exist in your environment.

Aggregate 3
Geek out about container security

That link will probably have a pop-up, trying to get that sweet security money.

Wednesday, March 21, 2018

Create Your Own Docker Container

There is usually a problem with finding a container that does exactly what you want it to do.  It's either missing something completely or needs some level of modification in order to work correctly.  My current gripe is with the Nginx container used as the reverse proxy for the security camera project.  It should be able to proxy the Real Time Streaming Protocol (RTSP).  Unfortunately it is missing the codecs and module to allow this capability in Nginx.  The good news is that we can add it ourselves.

I have spent a bit of time tracking down vendors that could teach a "containers from scratch" course.  Mostly because documentation was lacking, or was written for a very specific version of Docker.  This was when Docker docs was still coming up, and basically contained what you could find in the man pages.  The few blogs that contained information that was relevant were assuming some basic level of knowledge that hadn't really been included in their own documentation.  I had a few questions, and now I can provide answers.

My top three questions:

How do you build a base image?  You don't.  It takes too much effort and we are at a point where you can get your image from a pull very fast.  Yes, even for the security minded, you can pull faster than build.  There are verified/certified registries out there with guaranteed clean images.  Vendors like RedHat tend to cost a little bit of money, or the CERN scientific images can be pulled for free.  Although I use Debian, Ubuntu, and BSD at home, I have always worked with RPM based distros professionally.  For the standard home user, Ubuntu has the best documentation and therefore has better troubleshooting options.  Keep this in mind when you pull.

What is a base image and what can I do with it?  A base image is any image you pull.  The OS base images are just environments that operate like an empty operating system for a server.  No GUI, no applications, just a small environment of directories required to run the container.  To make it useful, you will need to connect to it and add applications.  The base image for an application is the OS base image plus the application and whatever is required to run it.  Let's say you pull a MySQL base image.  The dockerfile or registry will tell you what OS image it is running on (Ubuntu, Alpine, CentOS), and then list what else was added so the application can run.  Since you will need to modify the database for it to work, you will need to connect to the container and make those changes.  Once all modifications and additions are complete, you commit the image and deploy it.

How do I migrate an image to a new environment?  This is a big one for me.  There are environments where you absolutely cannot pull a docker image.  Or, you can't pull the image that you created off of your preferred registry/account.  So, how do you add a container to these environments?  Start with a system that you can use to pull.  You can then use docker export or docker save to create a tarball, and docker import or docker load to recover it on the other side.  I'm seriously disappointed that they couldn't come up with something clever about "port" in the import/export commands.

While I was getting started with containers, I was under the impression that you would have to:
Build a virtual machine
Install docker
Pull images that have everything you need
Export the virtual machine to a new environment

That's almost completely wrong.

As long as you are running an OS that can run docker, all you need to do is pull an image and modify it locally, or move it and modify it at the new location.  Pulling patches and applications before you move it can cut down on the amount of dependency hell and source code downloads/moves that are required to complete the configuration.

Let's go ahead and review the questions again, using the Nginx fix as an example.  Because we need to compile modules into Nginx, we cannot start from the Nginx base image; we have to build from a Base OS.  We start by pulling the Base OS image.  I'm using Ubuntu for this example.


Next, run the container "docker run -ti --name nginxfix ubuntu:latest".  This will start the container, open a bash shell on the container, and allow you to issue commands inside of the container.  You can now pull patches, install applications, and get source code.  For containers with an application already installed, you may just need to start them and perform "docker exec -t -i container_name /bin/bash" to get to a shell.  Base OS images tend to die quickly when you start them, since they don't have any applications to run.

Aggregate 1
Getting a shell on a Base OS container

Aggregate 2
Getting a shell on an application container

The actual package downloads and installs are a little intimidating.  We start with using apt to install required packages:
"apt-get install ffmpeg liblivemedia-dev git wget gcc libpcre3-dev libssl-dev make libaio1 libaio-dev vim"

Next, we pull the source:
"wget www.nginx.org/download/latest_version_tarball".  Unzip and extract the source.  Then, for the RTMP module, "git clone https://github.com/arut/nginx-rtmp-module.git".  Now we look at the instructions to build with the modules.

Aggregate 3
Build Nginx

The configure command that produces a build very similar to the official Nginx packages looks like this:
./configure --add-module=/opt/nginx-rtmp-module --with-http_ssl_module --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

And yes, that is all one command.

We need to create the service.  "vi /lib/systemd/system/nginx.service"

Insert:
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID

[Install]
WantedBy=multi-user.target


Now enable the service by using "systemctl enable nginx.service".
Aggregate 4

It might also be a good idea to compare the /etc/nginx/nginx.conf file from an existing Nginx base container and modify as necessary.  Last, we add the nginx user "useradd -s /bin/false nginx"

Now we can commit and test.  I was concerned about the container collapsing when I logged out of it, so I just opened a new terminal, listed the running containers, and performed a docker commit from there.
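
A quick sketch of that second-terminal step, using the nginxfix container from earlier and a made-up local image tag:

# confirm the container is still running
docker ps
# commit its current state to a local image
docker commit nginxfix local/nginx-rtmp:dev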


Because I put a little bit of effort into building this image as close as possible to the official Nginx container, we should be able to review the official Dockerfile to see what kind of settings we need to make it run.  At this point, any errors are going to be configuration errors.  We can now prepare to move it.

You can save the image or export the container.  Saving works on an image and keeps its layers and tags, while exporting flattens the container's current filesystem, including any changes you have not committed.  Once it is exported, copy it to a disk and move it to where it needs to go.  The commands are listed in the following aggregate and are much easier to understand after reading through it.

Aggregate 5
Import and Export
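
As a minimal sketch of the export route, again using nginxfix and made-up file and tag names.  Note that an imported image loses its metadata (CMD, ENTRYPOINT), so you specify the command when you run it.

# flatten the container's filesystem into a tarball
docker export nginxfix > nginx-rtmp.tar
# on the destination host, import the tarball as a new image
docker import nginx-rtmp.tar local/nginx-rtmp:imported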

Once I have sorted out any configuration changes required in the modified Nginx container, I will replace the existing Nginx reverse proxy that I have and test out the RTSP streaming.   If everything goes well, I will post about uploading to Dockerhub, local registries, and creating a Dockerfile.

Friday, March 9, 2018

Preparing for swarm on Photon

I remember the pre-SELinux days on Fedora.  In later editions it was installed, but it definitely was not enabled by default.  We are in a new age, where systems come considerably more secure by default.  This confused me quite a bit, especially when a few of the suggested fixes did not work and one of them only partially worked.  I was up late wondering why the system was opening port 2375 on an IPv6 interface, but not at all on IPv4.  This was just to expose the docker endpoint to portainer so that I could manage 3 hosts through a single web interface.  The issue I faced was that the methods in the documentation I had used to expose the docker port had changed.

The strange thing is that the old method of doing it worked on one Photon instance, but not the other.  Pretty much my entire problem, along with the resolution, was described in this article.

Aggregate 1:
The docker endpoint port

The only piece missing, which I had to resolve, was the correct IP address in the 10-dockerd.conf file.  It kept defaulting to IPv6 on the host when I used 0.0.0.0:2375.  Changed it to the actual IP address on the system and everything started correctly.  Ran "docker swarm init", followed the instructions to add the workers to the swarm, and now everything survives a reboot without losing connectivity to the swarm.  I did also run "iptables -A INPUT -p tcp --dport 2377 -j ACCEPT" on the manager node, just in case iptables wanted to fight.
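
For reference, this is roughly what that kind of systemd drop-in looks like.  The path /etc/systemd/system/docker.service.d/10-dockerd.conf, the dockerd binary location, and the 192.168.1.20 address are all assumptions; use your own host IP and check where your distribution actually keeps the file.  Also keep in mind that port 2375 is unencrypted and unauthenticated, so this belongs on an isolated lab network only.

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://192.168.1.20:2375

After editing the drop-in, run "systemctl daemon-reload" and "systemctl restart docker" so the change takes effect.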

Aggregate 2:
Create the swarm

This also opens the endpoint port for portainer.  You can add the endpoints and then use the drop-down menu to view the local and remote hosts.  This can be very helpful with a swarm.  I have set up a single manager in my 3 node swarm, since everything should be kept relatively small until requirements grow in a home lab.  Depending on the services that are built, multiple managers may be a requirement.  Future plans from within the docker realm include building a container "from scratch".  I was left without any good guides on this last year, so I figured I should try to add one.  Instead I ended up getting bored and creating a local registry with self-signed SSL certs.  I will also probably revisit that with LetsEncrypt certs.

A fun little aside paraphrased from something that I read on a forum recently:  Sometimes I forget to type in sudo prior to a command that requires elevated privileges.  To run the previous command in bash, you type "!!".  You can escalate privileges by typing "sudo !!", a command I refer to as "Bitch, I said".

Tuesday, March 6, 2018

Preparing for PiHole container

Pi-hole is a nice replacement for the DNS that is usually included on a home router.  It blocks ads, and can be configured to do a few other things to reduce unnecessary or annoying traffic on your home network.  I went into this one completely blind.  I didn't read any real install guides except for what was on DockerHub.  It turned an install that should have taken a few minutes into about an hour of troubleshooting, most of which had already been documented elsewhere.  I still have not dug too deep into configuration or tinkered with it beyond setup, but I figured I should document the absolute basics prior to setting up the container.

I switch between my Ubuntu desktop and Photon virtual machines as my docker hosts.  For something of this nature, I have found that a virtual machine tends to work out a little better than a workstation.  The main reason is that workstations tend to have little bits and pieces of configuration changes to suit the user.  Configuring the Pi-Hole container means that you will change /etc/resolv.conf and need port 80 available (according to documentation).  The instructions should be similar across Linux docker host environments.

The first thing to take a look at is whether port 53 is already in use on the host.  The lsof command is an excellent way to identify if it is being used, and what is using it.  The command is "lsof -i :53".  Once you identify which service is using the port, you need to copy /etc/resolv.conf to /etc/resolv.conf.backup.  Then you can stop and disable the service.

Aggregate 1
Good old article about finding port info

Once you kill the service, you will need to take a look at the original resolv.conf file.  Perform an ll or ls -al of the file, "ls -al /etc/resolv.conf".  In Photon it is a symbolic link, and it will need a new link to work.  You can force-link the backup over the old link, "ln -sf /etc/resolv.conf.backup /etc/resolv.conf".  You should now be able to run the lsof or netstat command to see that nothing is using port 53 anymore.

There's been some minor chatter about different environment variables being required when running this container depending on version.  That will have to be a trial and error situation.  For the version I loaded, the key environment variables that I set were:

ServerIP - probably a little more important if you are running it on your workstation.
WEBPASSWORD - because we want the security.
DNS1 - this did not load as a default in my build.
DNS2 - this did not either.

The rest of the setup was exactly the same as what was described in the DockerHub documentation.
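
For reference, a hedged sketch of what the run command can look like with those variables filled in.  Every value here is made up, and the image name, ports, and volume paths may differ between versions, so defer to the DockerHub documentation below:

docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  -e ServerIP=192.168.1.30 \
  -e WEBPASSWORD=change-me \
  -e DNS1=1.1.1.1 \
  -e DNS2=1.0.0.1 \
  -v pihole_config:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole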

Aggregate 2
DockerHub PiHole

My next issue was an obvious RTFM problem.  I was testing out the container on two different instances of Photon.  One of them holds an Nginx reverse proxy, which I was going to set up to proxy the admin page of Pi-Hole.  Unfortunately, I did not realize that the admin page is actually http://dockerhost/admin.  I was just typing in http://dockerhost or http://dockerhost:port-number for the system I was going to proxy.  Sadly, I had to RTFM to figure out that I needed to add /admin to the URL.

My final problem was understanding what was going on in the logs.  There is a repeating query of pi.hole in the logs.  That is because the container will constantly query itself.  I thought the container was locked up.  When you combine the fact that I was using port 8080 (which did not render the Pi-Hole webpage correctly) with the constant query, it looked like a badly broken system.  Nope, that's just what it would look like if you took a look at the activity of a brand new DNS server. 

So, now I give it a 24 hour run to make sure it does not crash by itself, and then slowly migrate a few of my active systems over.  The best test order for my environment is gaming systems first, then workstations, and then a full roll-out.  I urge a slow roll-out to allow enough time to recognize issues in specific systems and understand how to resolve them, rather than trying to fix a hundred problems at once.  This should also integrate nicely with Home Assistant.

Sunday, March 4, 2018

Common docker run commands explained

Most of my posts involving containers will include Portainer as a graphical front end.  I was bitten a few times with a fat fingered command going to the wrong port with the wrong name.  I've had to kill off a container and try to figure out an environment variable value, lose track of what I was doing, and try to dig around in my bash history to remind myself of what I was going to execute.  I have no problem doing this at work.  I am usually able to focus on the task at hand without significant interruption.  That is not the case at home, and certainly not while researching how to do more than just copy a command to implement a pre-built container.  But, if you want to go fast, command line is how to do it.  This blog is to cover the most common flags you will encounter when implementing a container from github or dockerhub.  Let's break down a few commands that have some common flags, and what they do.

docker run -d --name="home-assistant" -v /path/to/your/config:/config -v /etc/localtime:/etc/localtime:ro --net=host homeassistant/home-assistant
Docker is a pretty well laid out application.  To start off this command, we invoke docker and tell it that the command it is going to perform is run, which creates and starts a container.  The -d flag tells it to run in the background; without this flag it will take over your terminal.  We then name the container, which makes searching for running containers much simpler.  The -v flags give the container mount points.  The /path/to/your/config:/config is a persistent volume on the docker host that survives reboots, and it will be mounted in the container as /config.  The /etc/localtime:/etc/localtime:ro binds the host's localtime file to the same place in the container as a read-only mount.  --net=host puts the container on the host's network stack, which removes the need to publish every port individually.  Finally, homeassistant/home-assistant is the name of the image on the registry.

Let's go for another example with a few more common flags.

docker run \
-d \
--name plex \
-p 32400:32400/tcp \
-p 3005:3005/tcp \
-p 8324:8324/tcp \
-p 32469:32469/tcp \
-p 1900:1900/udp \
-p 32410:32410/udp \
-p 32412:32412/udp \
-p 32413:32413/udp \
-p 32414:32414/udp \
-e TZ="<timezone>" \
-e PLEX_CLAIM="<claimToken>" \
-e ADVERTISE_IP="http://<hostIPAddress>:32400/" \
-h <HOSTNAME> \
-v <path/to/plex/database>:/config \
-v <path/to/transcode/temp>:/transcode \
-v <path/to/media>:/data \
plexinc/pms-docker

This one is laid out a little differently to break down the options better.  It will start a media server on your network, which needs to be authenticated within 5 minutes.  I wanted to offer that warning in case the command is run and then you can't figure out why it isn't working.  The long list of -p flags is the ports published to the host, with tcp or udp based on the required connection type.  Since it streams video and audio, udp is preferred on some ports to reduce network overhead.  Underneath the long list of ports, the environment variables are called out with -e.  This gives information to the container without having to implement it in a config file that gets mounted.  In a previous blog I had to bind the time on the host to the time in the container; since the timing requirements are not as tight on this server, passing through the timezone works in this scenario.  The -h flag sets the container's hostname, which the server uses as its friendly name.  It's much simpler to find the server on the network when you're adding it to a client and it isn't some 30 character alpha-numeric string.  This command is an excellent example of what a docker command generally looks like.

Aggregate 1
Example 1

Aggregate 2
Example 2

Not shown in the commands here are a couple of flags that I have used in previous blogs.  The first is adding devices, which may be a requirement for some containers.  The flag is --device /path/to/device/on/host:/path/in/container, and you can append :rwm to the end to grant read, write, and mknod permissions on the device.  My example is adding a Z-Wave dongle to a container: "docker run -d --name zwaver --net=host -v /var/lib/docker/volume/config:/config --device /dev/zwave:/dev/zwave:rwm zwaver/zwaver".  The other missing piece is making sure the restart policy is set up correctly.  --restart can take the option "no", which is helpful if you built your own container.  Use "on-failure:#" to give it a specific number of retries when it relies on another container, something is missing, or you need to verify a configuration.  Use "always" for a self contained, known good container.  And finally, my favorite, "unless-stopped", which is great for containers downloaded from Dockerhub that do not require much configuration.

Aggregate 3
Full run command list

Less common commands

As home users, we are probably not incredibly concerned with internal docker DNS.  Most of what we might want to do with container to container communication can probably be accomplished with environment variables.  If you were to look at the internal network of a docker host, you may have a few dozen network connections on an internal subnet.  That internal network is where the containers are free to communicate with each other, without many restrictions.  If there is an issue with container cross talk, or you need to keep certain containers apart, new bridges can be added to segregate the traffic.  You can put a container on a specific bridge with --network=bridge-name if you do not want it running on the default bridge.  This can be especially helpful with databases that need to communicate with web servers.
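
As a quick sketch of that idea, with made-up network, container, and password values:

# create a user-defined bridge to isolate database/web traffic
docker network create webapp-net

# attach both containers to it; on a user-defined bridge they can resolve each other by name
docker run -d --name appdb --network=webapp-net -e MYSQL_ROOT_PASSWORD=change-me mysql:5.7
docker run -d --name appweb --network=webapp-net -p 8080:80 nginx:latest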

There are a few flags to define resource utilization, restricting the container to a certain amount of CPU, memory, and IO.  If you intend on using them, I suggest running without them first to test utilization.  The only time I have used these is during some labs to establish functionality of the flags, but they might be helpful if you are running a ton of containers from your main workstation or mining crypto-currencies.  I've used -m to limit memory to a specific amount, usually 512m for half a GB.  For CPU usage, I've set --cpus=0.25 to give a container one quarter of a CPU.  I have not had to set any disk IO limits, but everything I have done is either on an SSD or a ZFS pool with a considerable amount of memory.
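
Put together, those flags look like this on a run command (the name and image are placeholders):

# cap the container at half a gigabyte of memory and one quarter of a CPU
docker run -d --name busy-worker -m 512m --cpus=0.25 some/image:tag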

Aggregate 4
Limiting resources

Hopefully the information offered here is enough to allow a better understanding of what is happening when a docker run command is executed.  The intention is to give a beginner a tool for understanding what is going on when they copy and paste a command to start a new application.  This should also help convert a docker run command into the fields that need to be filled in for a Portainer configuration.  As computing moves further into the cloud and internet of things spaces, containers are the underlying technology making it possible.  I found the official docker labs either offered too much information or not enough when I did them.  But don't take my word for it.

Aggregate 5
Docker container lab

Containers: Portainer and Home Assistant

The fun stuff you can do with smart home devices is generally reliant on having a smart home hub.  You can set up scripts in your devices, or string together a few pieces and parts with apps like IFTTT.  But to take it to the next level, you probably want to start scripting automation.  I was a little let down recently by certain companies deciding to ship radios for the Zigbee standard or the Z-Wave standard, but not both.  You can easily add the pieces and parts you want/need with containers on a Linux system.  We should be able to pass the device through and verify its presence in the Home Assistant container.

The instructions will rely as heavily as possible on graphical interfaces.  Just make sure Docker is installed and running, and we should only need to touch the command line to build Portainer for the entry level guide.  There will be plenty of opportunities to write code to configure and automate the devices after the install is complete.  There will also be a method to pull configurations off of a Home Assistant install on a Raspberry Pi and migrate it over to the container.  That is how the decision was made to start this particular blog.  The Home Assistant install on my Raspberry Pi Model 1B is not as reliable as I would like it to be, so it is time to build a much more reliable container.

More computing power than Sputnik

I have covered the Portainer install in another entry, but we are going to go over how to make that container available immediately on reboot, in case you end up with a power failure or just need to patch.  The typical method of doing this to the containers started in Portainer is to change the "Restart policy".  But, you can't really do this from Portainer to the Portainer container, at least not in the versions I've used.  This doesn't really matter, because we can issue a command for the setting.  When executing the docker command to start Portainer, we can add the restart flag.  I always choose "unless-stopped" so there is a lower risk of having a failed container stuck in a loop of attempted starts and failures.

Aggregate 1
Basic Portainer install instructions

In previous examples, the command to issue was:
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

The new example should be:
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data  --restart unless-stopped portainer/portainer

You can then set up your login by going to the IP of your docker host with the port specified in the command, port 9000.

A fresh new install

If this is a typical home hobby user, and the system is going to remain on the internal network only, there isn't much to do for administrative configuration.  You can add a new administrative user, and maybe set up a group.  The biggest thing that will need to be done is adding the volumes to the environment while we prepare to add Home Assistant.  This is where the advanced users can play around a little.  I have a discontinued device, so my instructions for adding a Z-wave/Zigbee USB dongle will need to be modified based on the settings for the device purchased.  This is not a required configuration, but it does allow a home grown Z-wave or Zigbee network if you want to play around with one.  



Recently discontinued dongle

A neat trick that you can pull off with containers is adding a flag to bind a file to the container.  Since Linux treats everything like a file, that means we can bind the Nortek (or your device) to the container.  The first thing we need to do is create a volume for persistent storage of the Home Assistant configuration.  So, go to the Volumes section of Portainer and add a volume called hassconfig.  Since Home Assistant is on Dockerhub, we don't need to pull the image prior to running it.  Go to the Container section and select "Add container".  We will give the container a name we will remember; Home_Assistant is a good choice.  In the image configuration we will give it the name homeassistant/home-assistant so that it pulls the latest image.  Set the restart policy to "unless-stopped".  We will want to set the "Network" to host so we can scan for devices.  We will add the volumes as hassconfig -> /config, and then bind /etc/localtime as a read only mount for /etc/localtime.  If you have a Z-Wave/Zigbee device, bind the device to the matching location (/dev/ttyUSBx -> /dev/ttyUSBx or /dev/zigbee -> /dev/zigbee) under the Runtime & Resources tab, and flip the switch to Privileged mode.

Once the configuration is complete, you should be able to reach the Home Assistant web interface by going to http://dockerhost-ip-address:8123
This is also considered an alternative install, so we will need to modify the configuration file when we want to add things.  My recommendation is to look through the Available components and go through the steps outlined for the components you choose to add.  You should have a functioning system ready for configuration, and capable of being a smart home hub if you follow the instructions properly.

Aggregate 2

As a final note on how to configure the Z-Wave and Zigbee devices (I'll refer to them as Z* from here on), there may be issues when you perform a kernel update or accidentally unplug and replug the devices.  The Z* standards have the devices show up as COM ports, like an old serial cable connection.  This serial connection is part of the reason that the old IBM communication standard of MQTT is used for communication between Z* devices.  As a best practice, it is a good idea to make sure that the devices show up correctly and consistently when you boot the system.  I've added rules to my own system to have the devices show up as /dev/zigbee and /dev/zwave by configuring /etc/udev/rules.d/99-usb-serial.rules with the lines:

SUBSYSTEM=="tty", ATTRS{interface}=="HubZ Z-Wave Com Port", SYMLINK+="zwave"
SUBSYSTEM=="tty", ATTRS{interface}=="HubZ ZigBee Com Port", SYMLINK+="zigbee"

You can then bind the /dev/zigbee and /dev/zwave without worrying about the devices being reconfigured.

Coming soon, posts about creating containers from scratch and communicating across MQTT.


Pi Zero W Security Cameras Part 2: Internet Accessibility

The best part about breaking the Security Camera blog into parts is that there is a demarcation point of how far the security system can go.  This post explains how to get the system accessible from the internet.  Another nice part of this is that we will be abstracting away from the hardware and away from the operating system.  We will set up an Nginx reverse proxy to our hub from part 1, verify web connectivity, and verify that we can reach it from the internet.  There are some options on how to do this, and a few warnings.  First up is that the instructions will be for building an Nginx container.  I am using instructions for Portainer.io as the management console instead of command line to make the instructions more accessible to the masses.  I am running my container from a virtual machine that I have set up in my environment, but you can add the containers to one of the cameras or a physical system in your own environment.

This will be broken down into a few parts, and you should be able to verify that each part worked properly before continuing on.

Parts

1. Hub configuration and container setup
2. DuckDNS
3. Nginx configuration and SSL certificate

Hub configuration and container setup

Log into the MotionEye Hub and select one of your cameras from the drop down.  In the settings, select Video Streaming.  Enable the streaming with basic authentication.  Add a password and change the name of the surveillance user.  Apply the settings and go to the url in the Video Streaming menu, using the surveillance username and password to login.  I have not had any luck with digest authentication in apps or through a browser when enabling it on the MotionEye hub.

Video Streaming

We now get to add our containers into the mix.  Go to your system that you intend on running containers from and install docker.  Once the docker engine is running, you can perform the command "docker pull portainer/portainer" to get the latest portainer image.  You then execute the container with "docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer".  You can now access portainer by going to the url http://ip-address-of-docker-host:9000.  Setup the admin account and login to the dashboard.  There's an amazing amount of functionality in this container, some of which will be covered in a future blog about the basics of establishing a secure container environment.  For now, we are going to use it as is.

Aggregate 1

Install the Nginx container by going to the Containers tab and selecting "Add container".  Give it a name you will recognize on the top line, and add the official image by adding nginx:latest in the image configuration.  At the bottom, under "Restart policy", change it to "Unless stopped". Under "Actions" select the "Deploy the container" button. You should now see your Nginx container running in the Container menu.  If you click the name of your container, you can access the console of the container to look around at what is inside of it.  If you have any issues pulling the container, we will cover that in the Nginx configuration section.

Nginx Setup

DuckDNS

In order to get a working certificate and easily reach your home system, you will need a domain name.  Fortunately, you can just tag along as a sub domain of DuckDNS for free.  The real problem that I've noticed is coming up with a unique namespace.  For the benefit of the reader, we will pretend that aggknow.duckdns.org was not taken and use that throughout the article.  Go ahead and log in to http://www.duckdns.org with whichever account you want, or create a persona account. Once you have logged in, type in the sub domain that you would like, and see how many tries it takes to get one that wasn't already taken.  After you have the sub domain, click the install button on the top of the page.  This is where they did the world an enormous favor.  You select your sub domain from a drop down under the "first step" text and determine where you want to set it up.  Once you select your preferred method, scroll down and follow the instructions to get it updating regularly.


Rubber Ducky

A little consideration when setting this up: you probably only need one sub domain.  If you need more than the free tier of 5 sub domains, you are probably working with cloud based websites.  Once you have DNS working, you can forward whatever ports you want on your router and reach the systems behind them through the internet using your DuckDNS address (although I would limit the forwarded ports to only what you absolutely need).  So don't get bogged down trying to think up a unique name for every computer you have; you only need one sub domain name.

Nginx configuration and SSL certificate

We will need to write out the Nginx configuration and add it to our host in order to have a working website.  We will also need SSL certificates in order to have a proper security configuration.  Let's begin by explaining the SSL certificate method.  This will be a containerized LetsEncrypt application.  It will renew the certificates for you, which is very helpful.  The issue that I have found is that you can get blocked very quickly by the LetsEncrypt server if you do not set up your configuration correctly.  You get 5 chances before you are locked out for an hour.  So double and triple check your configuration prior to starting the modified containers.

You will need to create 4 volumes on portainer:

1. certs
2. conf.d
3. html
4. vhost.d

The certs directory is where the certificates will eventually end up.  Pay attention to permissions when we connect these to our containers.  The conf.d directory is where you put the configuration for Nginx to execute.  Because containers are static, you have to add volumes for anything you want to change and keep persistent through reboots.  The html directory is the basic "welcome to Nginx" splash screen default, but you can add html pages to serve out in here.  It is also where LetsEncrypt places its challenge to validate the domain before issuing a certificate.  The vhost.d volume is where the challenges are replied to from other containers, and I'm fairly certain that we are not using it.  Add the volumes in the Volumes menu on portainer.

I'm going to cover the commands and instructions as well as possible, but almost everything was extracted from the following aggregates.

Aggregate 2
Proxy companion instructions

Aggregate 3
Nginx config starter

Aggregate 4
Another Nginx config example

If you had trouble installing Nginx, "docker pull nginx" on the command line of your docker host should get you the latest official release of that container.  We can now create the container in portainer quickly without adding another registry to the configuration.  Give the container a name that you will recognize, then either start to type the name of the Nginx image you pulled and select the auto-filled name, or turn off the Nginx container you built and select Duplicate/Edit in that container's menu (double click the container).  Map an additional two ports: 80 on the host to 80 on the container, and 443 on the host to 443 on the container.  You will also need to forward these ports on your router to your docker host.  You can kill the router port forwarding on port 80 after you get your certificates.

In the Volumes menu at the bottom:

1. html to /usr/share/nginx/html
2. certs to /etc/nginx/certs (as read only)
3. conf.d to /etc/nginx/conf.d
4. vhost.d to /etc/nginx/vhost.d
5. bind /var/run/docker.sock to /tmp/docker.sock

In the Env menu, add a LETSENCRYPT_HOST variable with your domain name as the value, and a LETSENCRYPT_EMAIL variable with your email as the value.  The domain name should look like this in the value: aggknow.duckdns.org,www.aggknow.duckdns.org
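
If you would rather see the whole thing as a single command instead of Portainer menus, a rough docker run equivalent is sketched below.  It assumes the named volumes created above, the example aggknow.duckdns.org domain, and a made-up container name and email; treat it as a sanity check of the mappings rather than a copy-paste recipe.

docker run -d --name nginx-proxy \
  -p 80:80 -p 443:443 \
  -v html:/usr/share/nginx/html \
  -v certs:/etc/nginx/certs:ro \
  -v conf.d:/etc/nginx/conf.d \
  -v vhost.d:/etc/nginx/vhost.d \
  -v /var/run/docker.sock:/tmp/docker.sock \
  -e LETSENCRYPT_HOST=aggknow.duckdns.org,www.aggknow.duckdns.org \
  -e LETSENCRYPT_EMAIL=you@example.com \
  --restart unless-stopped \
  nginx:latest
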
You should be able to start the container without much issue at this point.  Now modify the file on the docker host at /var/lib/docker/volumes/conf.d/_data/default.conf to match the following aggregate.  For the first run you will comment out lines 3-6 and make sure line 2 is uncommented.

Aggregate 5
Nginx config

We will need to pull the container for our LetsEncrypt companion.  From the command line of your docker host, "docker pull jrcs/letsencrypt-nginx-proxy-companion" will get you the latest version of the container.  Go through the same portainer setup of a new container: give it a name and start to type out the image name.  You will add the same volumes, with 2 big differences: you will not map conf.d, and certs will not be read only.  Start the container, double click it, and select the logs button under the start time.  The Diffie-Hellman generation can take a few minutes.  Keep your eyes on the stderr logs.  If anything went wrong, hit Google and leave a comment on the blog with what the error was.  It might also be worthwhile to reboot the Nginx container and watch the logs on both of the containers.  If you generated the certificates, it's time to implement them.

From the docker host, edit the file /var/lib/docker/volumes/conf.d/_data/default.conf and comment out line 2.  Uncomment lines 3-6, replacing the certificate and key with what you have at /var/lib/docker/volumes/certs/_data/"your domain name .crt and .key", stop forwarding port 80 on your router, then restart the Nginx container.  Verify you can reach your cameras by going to https://your.domain.name/camera-name and use the surveillance username and password to log in.

There are plenty of apps that you can use to rapidly connect to your cameras through your phone.  Some are paid and offer bells and whistles, others are free and have advertising on the top and bottom of the stream.  Just keep in mind that some of the apps will require authentication every time you select which camera you would like to view.  As usual, be vigilant about permissions when adding the application.  An app that is used to log into a remote website with credentials should not require access to your call history or wireless settings.  Check the ratings, install info,  and if you feel especially security minded, perform a packet sniff on the app to see what it does when it is running.

I apologize for not including the steps to access the cameras from Amazon devices in this blog.  It turns out that I am waiting for a few things to update as well.  The Alexa developer console has changed format and the documentation is currently for the previous version.  So, until there is a reliable form of Amazon documentation, the Amazon devices instructions will be a work in progress.

3d design for printing

I don't want to sound like an idiot.  I really don't.  I just lack the patience to learn Blender.  It's not just because the nam...