Monday, May 28, 2018

Why was this missing?

I've done a pretty good job of making amateur mistakes when I know better.  Modifying an xorg.conf file without making a backup is a perennial favorite.  About 15 years ago I would rebuild the whole box because I didn't know any better; I think I consistently wrecked that file for years trying to add extra mouse buttons to the configuration.  This time I did something ridiculously stupid.  My container hosts are mostly virtual machines, so I should be able to clone them in a few minutes.  Instead of doing that, I attempted to migrate all persistent storage to an NFS share.  As expected, it failed miserably.  Nothing important was lost, but it stung a little knowing that I had wrecked something because I didn't even bother to look up the correct way to do it first.

I began my quest for information on the correct way to mount a share and use it for persistent storage.  Unless someone tells me otherwise, it looks like the correct method for a home lab is to install another package to allow NFS mounts in docker: a package built specifically for docker that isn't included by default and doesn't get the publicity it deserves.  I was quite surprised that it wasn't on the host image.

Aggregate 1:
Netshare plugin for docker

Of course the next step is figuring out how to use it.  Fortunately someone already wrote about that.

Aggregate 2:
NFS on a swarm
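
Before the full write-up, here's roughly what that looks like on the command line.  Treat it as a sketch straight from the plugin's docs rather than something I've battle tested; the IP, export path, and package version below are placeholders.

  sudo dpkg -i docker-volume-netshare_<version>_amd64.deb
  sudo service docker-volume-netshare start
  # quick smoke test: mount an NFS export straight into a throwaway container
  docker run -it --rm --volume-driver=nfs -v 192.168.1.50/export/containers:/data alpine sh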

This is a pretty big deal to me.  There were some things missing from the documentation that I figured out by searching around and tinkering, such as the port used for Swarm being blocked by iptables.  The absence of shared storage instructions may have been one of the reasons my first attempt at creating a service and swarm was a failure.  I don't recall everything that failed in that environment, but I do know that the ability to migrate containers with their own persistent data from one host to another should not be an afterthought.  For something like the MotionEye container, shared storage is considerably more important.  A simple NGINX config file is not hard to replace.  Security camera footage filling up local storage and taking down every container on the host is a real problem.  That is exactly why I attempted to migrate everything from local storage to a share.

It might just be the Photon hosts that I use.  So, coming soon, a blog about setting this up in Portainer and using it to create services to deploy to a swarm.

Sunday, May 27, 2018

Add an APC (or other) UPS to Ubuntu 16.04 and Netdata

Netdata has become the way I push for actual statistical data on a system.  In my line of work, the statistical data on what a computer is doing while running an application is worth as much as the application itself.  That is the nature of computer science.  We have figured out creative methods of adding Netdata to running systems, including diskless nodes.  I have been running it at home for months, yet I didn't even look at the glaring issue of what was missing until I added it to my NAS.  I run FreeNAS at home, and you can look at their website to see my blurb about how much I like it.  It really does everything I want it to do.  It was ridiculously easy to add a battery backup and configure it in the UPS settings.  Then I noticed that there was a service for Netdata included.  I turned it on, and now I can see the battery backup in a browser window when I look at Netdata.

The issue I ran into with my home lab box was not with Netdata; it was with the configuration and installed resources on the box itself.  Ubuntu 16.04 requires apcupsd to be installed.  I thought it would be automagic, since the system showed the battery in the status bar next to audio and language.  Nope, you need to install the package for battery backup support.  So, I installed apcupsd and configured it.

Aggregate 1:
Install apcupsd

I'm not sure that alone will fix the problem.  I changed UPSNAME to my UPS's name.  I have a 1500 because power sucks at my house.  I modified a copy of the existing /etc/apcupsd/apcupsd.conf rather than pasting over it with the listed config.  The big secret to all of this is that I had to change the status setting in the config from 0 to 1 so the status could actually be read.

Aggregate 2:
Change the status
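
For anyone playing along at home, here is a sketch of the lines I believe matter on Ubuntu 16.04.  The UPS name is whatever you called yours, STATTIME is my best guess at the 0-to-1 status value I mentioned above, and the ISCONFIGURED flag lives in /etc/default/apcupsd, so compare against your own copies before trusting mine.

  # /etc/apcupsd/apcupsd.conf (my working copy) - example values, verify against yours
  UPSNAME Back-UPS-1500     # the name you gave the unit
  UPSCABLE usb
  UPSTYPE usb
  DEVICE                    # left blank for USB autodetection
  NETSERVER on              # lets tools read the status over port 3551
  NISPORT 3551
  STATTIME 1                # was 0; 1 writes the status file every second
  STATFILE /var/log/apcupsd.status

  # /etc/default/apcupsd - the daemon refuses to start until this is flipped
  ISCONFIGURED=yes

  sudo systemctl restart apcupsd
  apcaccess status          # should dump the UPS status if everything is readable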

Next up is modifying the Netdata side.  Run a locate to confirm you are changing all of the configs in the same place; if not, copy and paste as necessary.  The key lines you need to uncomment are listed in the Netdata configuration helper.

Aggregate 3:
Netdata configuration helper
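
Since I had to hunt for it, this is roughly what I ended up uncommenting.  The paths assume a stock /etc/netdata layout and the exact option names are from memory, so lean on the helper above and whatever locate shows you.

  # /etc/netdata/charts.d.conf - enable the apcupsd module
  apcupsd=force

  # /etc/netdata/charts.d/apcupsd.conf - point it at the local apcupsd daemon
  apcupsd_sources["local"]="127.0.0.1:3551"

  sudo systemctl restart netdata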

Then you just need to pay attention to the logs in /var/log/netdata, just like grandpa had to. 

Shitcoin

I don't like bitcoin.  Never have, probably never will.  It is a speculative commodity that is based on nothing.  It is not currency.  It is damn near impossible to stretch the truth far enough to make it sound like a currency.  To add insult to injury, web browsers are having to incorporate measures to prevent miners from taking advantage of users that go to the wrong website.  That's right, you can have people mining on your box because you went to a blog.  I will do everything in my power to prevent that from happening on my blog, but mostly that is up to Google.  Even if you are a fan of crypto-currency, this is a problem.  You should not be mining for strangers.  So here are some things that help prevent it.

I found out about the latest and greatest from Firefox today: an extension that shuts down mining.  There are also extensions for Chrome, but they might be a little questionable.  The worst method of preventing mining is also the easiest: turn off JavaScript.  Then you can't use any websites.

Aggregate 1:
Kill it on firefox

Aggregate 2:
The #1 on Chrome

Any method you choose to use, do it.  Allowing people to mine on your system is ridiculous.  If you want to do it, that's your choice.  Don't let people waste your electricity for free.

EDIT: Adding onto this due to breaking news over the past week.  There were contaminated Docker images on Docker Hub.  Surprise, crypto mining software was inside the containers.  I am starting to become entertained by this.  I get it, everyone wants a get-rich-quick scheme.  The funny thing is that now people are trying to get you to unknowingly run code that does not destroy your files in order to make money.  It's probably the best case scenario for malware.  It's not even ransomware.  While it is still annoying, and a wake-up call to make sure you are pulling from good sources and reviewing what is in the container, it could be so much worse.

Friday, May 25, 2018

Obligatory ML/AI blog

I was told by my NVIDIA rep that I am on the right path and at the very beginning of it.  I'm trying to set up some basic AI dev environments for people to screw around on and start thinking about what they want to focus on if they decide to create a project.  Enterprise grade AI dev environments are really big compared to what you might have at home.  This isn't a single user with a graphics card running TensorFlow.  Nope, it will be about 5 users with graphics cards running TensorFlow.  I am using it as a method to gauge demand before I go and grab whatever purpose-built chipsets I can get my hands on.  Since I am looking into new methods to support potential customers, I figured I should share my very basic research into AI.

First off, I'm using nvidia-docker.  This decision was made lightly, because it provides the portability and resource configuration capabilities that I need.  Also, I am using a 1070 in my system at home.  Second, I am using TensorFlow.  The other major offerings come from companies like Facebook, which don't deserve the leniency that Google has earned after years of not being offensively obvious about data mining.  There are more blogs about setting it up than I can link, but I will share a few things with you so I can complain on a public forum.  Not many instruction sets cover merging your docker config file, and there have not been many updates recently (as of this writing).  Let's get to it: an nvidia-docker install with a merge to make it work!

Aggregate 1:
The basic install instructions

Aggregate 2:
Adding the NVIDIA runtime to docker
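
The merge everyone glosses over is /etc/docker/daemon.json.  If you already have settings in there, you have to fold the NVIDIA runtime entry into the existing JSON instead of pasting over it.  A minimal sketch of what mine ended up resembling, assuming the nvidia-docker2 package put nvidia-container-runtime on the system:

  sudo cp /etc/docker/daemon.json /etc/docker/daemon.json.bak
  # then hand-merge the runtime entry into the existing JSON:
  #   {
  #     "runtimes": {
  #       "nvidia": {
  #         "path": "nvidia-container-runtime",
  #         "runtimeArgs": []
  #       }
  #     }
  #   }
  sudo systemctl restart docker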

After these steps are followed, in a bit of an aggregate 1-2-1 kind of way to make nvidia-docker work, you should have a working copy of TensorFlow.  The annoyance after that was Chrome becoming unresponsive.  I have Netdata running on my box, and there was absolutely nothing out of the ordinary the first few times I ran TensorFlow and had it crash my browser.  I then opened a new Chrome window exclusively for the Jupyter notebook and everything ran fine.  One thing stuck out, and it was mildly hilarious: TensorFlow complained when the amount of available RAM dropped below 4GB.  The warning only showed up in the shell I had launched from, but I looked at it wondering how often it would show up in the logs once this gets placed into a dev environment.  Not to sound like a crusty old admin, but 4GB is a lot of RAM.
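
For reference, the command that got me to that Jupyter notebook looked roughly like this.  The image tag and default port are assumptions pulled from the TensorFlow docs of the time, so adjust as needed:

  docker run --runtime=nvidia -it --rm -p 8888:8888 tensorflow/tensorflow:latest-gpu
  # the container prints a Jupyter URL with a token; open it in its own browser window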


Once the Jupyter notebook was accessible per the instructions, I went ahead and ran all of the sample notebooks.  Running them in their own window earned me the ability to complete all the notebooks without having Chrome lock up.  At the end of notebook 3, they mention that you can run one of the last sets multiple times to increase accuracy.  It does do that, but it will take some time.  My graphics card chews through it in under 5 seconds, but validation error only decreases by about .1% after every 5-10 runs once you hit the 1.5% test error threshold.  Next step: figuring out how to export what the system learned.  Not a great place to be, considering that many users can't even get their systems to complete the sample notebooks.  That means the available documentation drops by about 90%, and I may not have much to share that isn't behind a paywall.

What I have learned from the obligatory AI test is that there is still a whole branch of computing that requires support and security that may be overlooked for the time being.  But it is coming, and we should all push for this to become a highlight with the admins.  Typical AI uses revolve around big data.  We should also focus on data security regarding input and output to ensure that we do not process bad data or present bad data.  Hardware requirements and messages/errors need to be able to be tuned so that we can capture and configure how the data is processed and determine if there are problems.  These issues are form fitted to the environment, so it will take some effort to determine what is appropriate.

After roughly 20 tests, I have a 0.8% error rate in determining what a hand drawn number is.  I will blog again soon about how to push that knowledge outside of the AI container.

Wednesday, May 16, 2018

Migrating from SSD to M.2 or NVMe in Ubuntu

A couple of years ago I started working with NVMe drives at work.  They advertise "up to 6x the speed of SSD", and are pretty neat to play with when working with vSAN.  Of course these are industrial grade NVMe drives, which are pretty much a board with a bunch of M.2 drives on it and a string of capacitors, with some other chips sprinkled in for advanced capabilities.  I obviously needed to have something similar in my computer, so I ordered a 960 EVO and a PCIe adapter.  First impression: these things are small, about the size of laptop RAM.  My next observation was that my old box wasn't going to be able to use it as a boot drive; it was just too old.  So I put it in my system to use for ripping and transcoding video files.

Aggregate 1:
Isn't it pretty?

But then I upgraded to a motherboard that has an M.2 socket, so I had to make it my boot drive.  The nice thing about Linux is that it tends to be easier to make backups and move the OS around.  So, I built my new box, popped in the old SSD, booted up, and made sure everything worked the way I wanted it to.  Then I had to study up on migrating the data and find the missing pieces in the instructions.  The first step was to create an Ubuntu DVD; I used 18.04 since I knew it would have a decent nouveau driver (got a 1070 in my new rig).  I booted from the disc and ran it live, which takes a long time to finish booting and get to a working screen.  This part was rough because I had to do it a few times due to missing instructions. 

Aggregate 2:
The Bionic Beaver

Once booted, I needed to install gddrescue and copy the SSD to the M.2.  I probably could have figured out how to do this with dd, but I was hoping this would be the one and only step.  For those who intend on pulling out their old SSD after the upgrade, it probably could be.  A quick note before anyone begins on this endeavor: check the block size of your disks.  I got lucky with uniform 4k blocks on my devices, but I have seen 512 byte and 8k block sizes elsewhere.  A simple blockdev --getbsz /dev/sda1 to view the block size on that partition can mean the difference between completing the task quickly and having to reboot into a live disc an additional time.  The instructions for using ddrescue are simple enough: sudo ddrescue -v --force /dev/sda /dev/sdb to clone data from one drive to another.  NVMe or M.2 would look a little more like sudo ddrescue -v --force /dev/sda /dev/nvme0n1, and the full instructions for moving stuff around on bigger and smaller drives are in the following aggregate.
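
Putting the pieces from that paragraph in one place, the whole clone looked roughly like this on my hardware.  The device names are mine, so double-check yours with lsblk before copying anything:

  sudo apt install gddrescue               # the package that provides ddrescue
  lsblk                                    # confirm which device is which
  sudo blockdev --getbsz /dev/sda1         # block size on the source partition
  sudo blockdev --getbsz /dev/nvme0n1      # block size on the target device
  sudo ddrescue -v --force /dev/sda /dev/nvme0n1   # whole-disk clone, SSD to M.2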

Aggregate 3:
Migrating the OS

The part that I got stuck on was the UUID.  Cloning the drive leaves the new device with a UUID identical to the old one.  I had to make changes to the drive anyway, so I decided to fix it in gparted.  Yes, this can also be done in parted, if you want to get down like that.  The first task was to edit the M.2 drive to use all of the storage, since it has double the capacity of the old drive.  This part is dangerous: you can break the install on the drive you are modifying if something goes wrong.  That is also the reason for the UUID change, so I could easily get back to a working SSD if something bad happened.  Once the capacity change is completed, go to the Partition menu in gparted and give each partition a new UUID. 

Aggregate 4:
Gparted capacity increase instructions
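
If you would rather stay on the command line than use the gparted menu I used, the same UUID shuffle can be sketched out with tune2fs for ext4 partitions and mkswap for swap.  The partition numbers below are from my layout, so treat them as placeholders, and run this from the live disc with nothing mounted:

  sudo e2fsck -f /dev/nvme0n1p2          # fsck first so tune2fs will agree to touch it
  sudo tune2fs -U random /dev/nvme0n1p2  # hand the cloned root partition a fresh UUID
  sudo mkswap /dev/nvme0n1p3             # recreating swap assigns a new UUID as well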

Now that you have new UUIDs associated with the partitions, you need to fix your boot parameters.  What was once a difficult and boring part of Linux administration is now automated, which is great because there are too many ways to fat finger a UUID if you have to enter it manually.  All that is needed is to run boot-repair.  Just make sure you check out all of the options, select the drive you want to repair (nvme0), and only write to that drive.  That way you can pull the M.2 drive out and go back to the SSD if things seem too dire after reboot. 

Aggregate 5:
Boot repair, but not from a cobbler
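
From the live session, boot-repair comes out of a PPA rather than the stock repos, at least it did when I ran it; roughly:

  sudo add-apt-repository ppa:yannubuntu/boot-repair
  sudo apt update
  sudo apt install -y boot-repair
  boot-repair   # Advanced options: select nvme0 and only write to that drive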

Reboot your system, verify the boot option in the BIOS, and see how fast the system comes to life.  I've heard some Optane systems can be up and running before you pull your hand back from pushing the power button.  Once you are sure everything is to your liking, you can wipe the old drive and use it for tier 2 storage, or a new mount point for /var. 


Saturday, May 12, 2018

New Hardware (filler)

This is a complete fluff post about my new hardware.  I just got a new Ryzen 7 system with 32 GB of RAM and an Nvidia 1070 card.  I will have some posts about nvidia-docker and AI in the very near future.  I also got a new battery backup, because all of my old UPS systems were trashed, which helped lead to me getting a new system.  So guys, if you need a new system and need approval from a significant other, stop replacing the batteries in your battery backup units.  It only took my system a few years to die when I did that.  RIP, 6-year-old eBay used-parts special.  Welcome back, modern computing.

On an interesting note, I only had to move the hard drive from the old hardware to the new, then modify some network and display settings (I got new monitors too).  A very simple and easy transition.

Bias Lighting and USB

My family is pretty brutal on the television binge sessions.  A new show will come out on a streaming service and we cancel our weekend plans to watch it.  Not a great strategy for shows like "Black Mirror", since you need a day to digest the episode completely, and maybe a trip to a therapist.  One of the items I kept seeing in smart home social media threads for binge viewing was bias lighting.  It's a neat little hardware add-on for your television that prevents eye strain and assists with contrast.  Grays seem grayer, blacks become blacker.  A neat concept, so I did some more digging.

One of the first things that came to mind was the lingering expectation from Philips to integrate the Hue color bulbs and LED strips into your smart TV or Roku/Firestick/AppleTV/Chromecast.  It's been referred to as "the immersive viewing experience" on the This Week in Tech (TWiT) podcast a few times.  That is not what bias lighting is.  Bias lighting is an attempt to match the color temperature of the lights that illuminate the screen, at a reduced lumen level, by reflecting and diffusing the light source.  That's a pretty direct way of saying that the light is going to be as cold as the fluorescent tubes that light up mental institutions, but it will be behind something so it won't be as bright, and it helps protect your eyes.

Aggregate 1:
What is bias lighting? 

I added the LED strips to my Amazon shopping cart and went to sell the idea to SWMBO.  After about 45 seconds I realized that she knew more about it than I did, and had probably written a paper on why it should be an OSHA standard to have bias lighting on all computer monitors.  She was already sold, so all I had to do was hit the checkout button.  Here's where things become a little "first world problem".  They sell inexpensive stick-on kits, which are great because I love to save money.  The problem is that they are one size fits all for a range of sizes: a kit that fits a 55 inch TV can also cover a 75 inch TV.  You can buy lights specific to your television brand and model, for a price.  But I am unwilling to pay a 300-500% markup to have one that is specific to my TV, which probably requires taking a mounted television down so the kit can be gently placed on and the adhesive given hours to cure.  So I got the one size fits all.

Aggregate 2:
Extra large LED strip

I went ahead and wrapped my largest TV, with no excess LED light strip to deal with.  Looks great; I just have to adjust the angle to prevent a very close and direct reflection of the LEDs on the bottom.  The plug is a cute little integrated USB connector that plugs straight into the screen and turns on as soon as the TV does.  Next, onto my smaller TV.  I ended up with about 4 feet of excess, and I was able to easily cut it off at the "cut here" mark.  I'm one of those people who grew up with an education in the environmentalist 3 R's: Reduce, Reuse, Recycle.  4 feet of copper and LEDs surrounded by plastic shouldn't end up in a landfill; it's wasteful.  So, I did what any geek with a soldering iron would do.  I made another bias light.

Noted in aggregate 1 is the ability to use a single strip in the center of a TV that is not mounted.  One of my kids has an unmounted TV that would be able to make use of the excess.  The trick was to connect it to USB so that it could be plugged in and used just like the others.  The pinout of USB 1 and 2 is dead simple.  You have red (5 volt power), black (ground), green (D+ data), and white (D- data).  We are interested in the red and black.

Aggregate 3:
USB pinout

Caution: I believe certain USB 3 and C cables and interfaces can reach higher than 5 volts without making a data connection, but don't quote me on that.  Verify with the cable manufacturer and do as much homework as necessary if you need to use those cables.  

To connect the USB cable to the LED strip, simply solder the red wire to 5v+ and the black wire to 5v-.  I recommend cutting the USB cable at around 6-12 inches, whatever allows the least amount of bend and doesn't protrude out from the side when plugged in.  The remaining portion of the USB cable can be saved for experiments with serial and network connections, or as a replacement for a damaged cable.  Worst case scenario, find an electronics recycling facility or event.  To solder, tin the pad and the wire, then solder them together.

Aggregate 4:
Solder wire to board

Once you have soldered everything together, pull the adhesive backing on the LED strip down enough to wrap about an inch of the strip with electrical tape.  Wrap from an inch above the solder joint to an inch below where the USB cable was cut open.  Plug it in to verify that everything lights up; I recommend an appropriate USB power adapter plugged into a socket that won't take down all of your power if you have a short circuit.  I'd show pictures of the final product, but I'd get yelled at by a 12 year old about how I wrecked his game of Fortnite.  The Luminoodle is tuned for USB, and is ready for this project.  I'm so glad I didn't have to add tiny surface-mounted garbage to make this work.

Note:  Most TVs are USB 2, which can usually be determined by the color of the USB interface on the TV.  White is 1, black is 2, blue is 3, and red is typically a high-current or always-charging port (USB-C is the newfangled connector, and it's a different shape entirely).  Make sure you verify and test each type prior to leaving it plugged in without supervision.  Don't burn down your house because you refuse to test something you built out of spare parts. 

Wednesday, May 2, 2018

Backslash or no Backslash?

One of the minor annoyances I encounter regularly is a common one on the command line.  People who primarily use Windows tend to be the big offenders, mostly because you don't run into the same issues in Windows as you do elsewhere.  It's the folder and file names with spaces issue.  Most of the Linux professionals I know use an underscore in place of a space, which helps with scripts and config files.  Windows users tend to name things with spaces in the file or folder name, because it generally doesn't matter in their shell or batch files.  It still messes me up when I'm looking for something and forget to add the backslashes to get to it.  An example of what it looks like:

Windows  C:\Documents and Settings\User\Documents\I bet you hate spaces
Linux  /home/User/I\ bet\ you\ hate\ spaces
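
For anyone who hasn't been bitten by this yet, the backslash is only one way to escape the space in a shell; quoting works too, and which one you reach for is mostly habit:

  cd /home/User/I\ bet\ you\ hate\ spaces      # escape each space individually
  cd "/home/User/I bet you hate spaces"        # or quote the whole path
  ls -l "$HOME/I bet you hate spaces"          # quoting matters inside scripts too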

It's really not a huge deal, but it has become ingrained in my head as the correct method to deal with spaces in config files and such.  But the GUI doesn't care about how we do it in the shell.

I had decided to configure Tautulli to look a little deeper into my Plex server and get a good idea of system usage.  This ended up being a bit of an ordeal for me, because I have kept Plex in a jail on my NAS ever since I built the NAS.  That meant I had to add a folder on the NAS to share to the jail, migrate the log data and mount point from the jail to the new storage, and then point to the new storage from my Tautulli container.  Somewhere in there I also had to mess with users and groups to make sure everything went smoothly.  So, after about an hour of messing with stuff, I was ready.

Aggregate 1:
A nice addition to Plex

Aggregate 2:
Tautulli Container!

I used the same configuration info from the Tautulli Docker page to set up the container in Portainer, and received an error that it could not find the log mount point.  I started thinking of ways to check, since it wasn't really getting far enough to generate anything useful in the logs.  After about 10-15 minutes, I decided to remove the backslashes and see what happened.  Of course that worked, which was annoying since the timezone entry had to have capital letters and an underscore, just like a normal shell command.
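
To make the difference concrete, here is the same bind mount written both ways.  The paths and image name are made up for illustration, so don't copy them verbatim:

  # in a shell, the space has to be escaped or quoted:
  docker run -d -e TZ=America/New_York -v "/mnt/tank/plex logs:/logs" some/tautulli-image

  # in the Portainer volume mapping fields, you type the raw path with no backslashes:
  #   host:      /mnt/tank/plex logs
  #   container: /logs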

I guess the lesson of this blog is to remember that sometimes there is a difference configuring things in a GUI and a shell.  And now I can see who is watching my server.

3d design for printing

I don't want to sound like an idiot.  I really don't.  I just lack the patience to learn Blender.  It's not just because the nam...