I've been working with virtualization for about a decade, and I've set up, managed, or tested the vast majority of hypervisors on the market. The two I will immediately discard are Xen and Hyper-V, because they put more effort into attempting to disrupt the field than into building out a capable product. Hyper-V is great if you are a Windows shop, but even Microsoft is moving away from being a Windows shop (seriously, look at how they accomplish containerization). Xen is a solid no after the schism that split Xen and KVM about 9 years ago: KVM won, and it works better. The best tools I have used are typically VMware products.
For type 2 hypervisors, the ones that run like applications on an existing operating system, I don't keep very up to date. VirtualBox is pretty good at what it does, but it can be a complete pain to work with professionally. Open Virtualization Format (OVF) was supposed to be the standard for how virtual machine files are stored, and VirtualBox has done an okay job of keeping up, but it still presents problems when migrating from its OVF to VMware's OVF or to KVM's QCOW.
Aggregate 1:
VirtualBox, an easy way to annoy the enterprise
KVM is an excellent standard for virtualization, easily in the top two. The biggest problem is that people gravitate towards it because it is free, and they ignore major shortcomings because they didn't have to pay for it. The biggest shortcoming is that it defaults to QCOW for its disk files, which reduces portability to other hypervisors. Because of that choice, there is no unified version of OVF across hypervisors with all of the groups working towards a common goal, and porting a VM to VirtualBox or VMware becomes a major obstacle. Even that is not my biggest gripe with KVM. My biggest gripe is the QEMU requirement, which effectively turns a type 1 hypervisor into a type 2. I know they are working to bake in the pieces needed, and may have already succeeded, but it is still a requirement in every desktop implementation of KVM that I have worked with.
Aggregate 2:
KVM, the #1 choice for Linux users
VMware has a couple of type 2 products, and they work well. Player lets you build a VM and is good for testing an application. It's a great way to get a VM up and running before deciding whether you want to spend money on a full type 2 suite. If you do, Workstation has a ton of tools; it is built for enterprise development and runs on both Windows and Linux. Once again, the shortcoming is the OVF format, which VMware actively supports on all of its hypervisors. It just doesn't play nice with others, in much the same way that Microsoft Office applications don't play nice with open source office suites.
Aggregate 3:
VMware Workstation, how much do you want to spend?
I rarely use type 2 hypervisors. Usually I'm getting aggravated at someone bringing me a VirtualBox OVF that doesn't work because they forgot to unmount some virtual optical disk, or that isn't compatible with the system I'm on. I run KVM on my home workstation and have managed it on quite a few systems professionally. What irritates me is that the desktop implementation of KVM is treated like an enterprise type 1 hypervisor on many of the systems I have managed. It is not, and should not be treated like one. That has led to many design flaws that I have worked hard to fix.
Type 1 hypervisors are where it's at. At the beginning of my professional work with virtualization, there were few options. You could run the very old version of Xen that became KVM on a minimal Linux system and virtualize a few machines, or you could run ESX to maximize the density of virtual machines. This was back when ESX still used Red Hat code to boot. I started on ESX 3 without a vCenter server and consolidated one rack of 3U servers into 4U of space. When the bosses took notice, I got funding and took 8 racks of servers down to six 2U servers. I upgraded that environment through the years to vSphere 6, then left for greener pastures.
Since I left, I have been able to implement vSAN and NSX to get closer to Hyper-Converged Infrastructure (HCI), which I think is really what I had in mind when I was imagining what a Software Defined Data Center (SDDC) should be. This is when I started running into the insanity of competing products, the most demanding to manage being OpenStack. It is an amazing product if you have a team that can work on it, but there really isn't a good way to implement that monstrosity without a few thousand documents on how to operate it in case you ever want to take a vacation. Trying to pull pieces out of it to make HCI work without implementing all of OpenStack is a fool's errand. I spent a few months messing around with OpenDaylight in an attempt to avoid spending money on NSX, then realized that OpenDaylight is a toolset for building your own solution, not a solution that can be implemented.
Aggregate 4:
OpenStack, never be allowed to take vacation again
This led me to look for solutions that are ready to be implemented elsewhere for less money. I looked into Ceph and Gluster on Red Hat Enterprise Virtualization (RHEV), naively thinking it wouldn't cost money. Ceph was pretty much a non-starter on RHEV. Gluster costs money and was documented in a way that amounted to drive mirroring across nodes, so there wouldn't have been any real savings once drives and capability were factored in. The worst part of the HCI picture was looking into Software Defined Networking (SDN) on RHEV: they were beta testing Open vSwitch, which immediately put them 5 to 10 years behind VMware. The RHEV management console was the definition of janky. So they lost out on anything bigger than a 4-node test cluster, which was wiped when the systems were needed elsewhere.
This is also where I found out that Red Hat was becoming a strange creature. They were buying up a bunch of open source projects and charging a significant amount of money for enterprise support. By the time my lab started talking about containers, Red Hat had become the worst company to test on. Products that are free and available everywhere became a cost that we could not get past. Since my professional development operations happen out of band, not connected to any other network, their implementation of Docker was a nightmare. A few simple changes could have fixed it, but it was a bridge too far once you added up the cost and the lack of turn-key features.
Aggregate 5:
RHEV, where 5 years behind the competition counts as up to date
VMware tends to just work. The turn-key solutions are pretty much ready to go out of the box. The worst thing I can say about it is that, until recently, I couldn't really build out my home lab with quiet ESXi servers without worrying about an enterprise-grade power bill. VMware HCI is very expensive to run, but it works and it is cutting edge. They also have free online Hands On Labs that can get staff trained quickly and effectively. Before management freaks out about the price, keep in mind that it actually works and does things the competitors cannot.
Aggregate 6:
VMware, throw money at problems to make them go away
Since I run FreeNAS at home, I had a different solution: welcome Bhyve, the BSD type 1 hypervisor. I can use my NAS storage as the virtualization datastore, kick off virtual machines, and set up a decent environment for development. The biggest missing feature I have noticed is the inability to easily pass through devices on the box that I built. I'm not certain this is an actual Bhyve issue, since my hardware doesn't really support some of the things I would like to pass through, like PCIe cards. But at some point I would like to attach my Z-Wave dongle to a virtual container host, so I will be blogging about that in the future.
Aggregate 7:
Bhyve, because I already have the NAS
For almost all of my needs at home, the Bhyve implementation works out perfectly. For everything else, I use the KVM implementation on my workstation; it is the "everything else" box. Hopefully this sheds some light on enterprise and home solutions, and on how and why to decide what might be a good fit for home and business labs.
Saturday, April 21, 2018
Sunday, April 15, 2018
Etcher if you need it
Etcher has become my method to burn an image to flash media. With the sheer number of Raspberry Pi images that need to be written, I figured I should post about it. Yes, this is filler.
Aggregate 1:
Etcher on Ubuntu
It's a pretty simple tool to use, and you don't have to look back and forth between a bunch of terminals to determine whether the dd of your image is done. If you're like me, there are usually at least 4 terminals open, 27 Chrome tabs, maybe a couple of VNC sessions, and probably a few web testing apps or protocol analyzers. Skip digging around for the window with the info you want and just use Etcher. Another nice thing is that if you need to make multiple copies of an image (like security camera images), it is ready to burn the same image again immediately. It is pretty much just as quick as using a shell to dd an image.
The biggest shortcoming I have noticed is the lack of tools to pull an image off a card. That somewhat makes sense, since the procedure is something that could cause problems. If you are not familiar with it, the workflow is:
Insert the card to image from --> mount the card --> verify the files --> unmount the card --> dd the device to an .img file.
To make a proper application that does this, it would need to look at storage space on the source and target. It would also need to pop up additional windows to validate files from the source prior to unmounting and duplicating to the target. It could get tricky to add these features to something designed so that people without technical experience can push images to a card.
Aggregate 2:
Image ripping instructions
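For reference, here is a minimal sketch of that workflow scripted in Python with subprocess. The device node, partition, mount point, and output filename are placeholders you would swap for your own; blockdev and dd are the standard Linux tools, and the whole thing needs root.

import shutil
import subprocess

SRC = "/dev/sdX"        # placeholder: the card to image
PART = "/dev/sdX1"      # placeholder: the partition to spot-check
MNT = "/mnt/card"       # placeholder: an existing mount point
OUT = "card-backup.img"

# Mount the card and list its files so you can verify you grabbed the right one.
subprocess.run(["sudo", "mount", PART, MNT], check=True)
print(subprocess.run(["ls", "-l", MNT], capture_output=True, text=True).stdout)

# Unmount before imaging so nothing is writing to the card mid-copy.
subprocess.run(["sudo", "umount", MNT], check=True)

# Check that the target filesystem has room for a full raw image of the card.
card_bytes = int(subprocess.run(["sudo", "blockdev", "--getsize64", SRC],
                                capture_output=True, text=True, check=True).stdout)
if shutil.disk_usage(".").free < card_bytes:
    raise SystemExit("not enough free space for the image")

# dd the whole device to an .img file.
subprocess.run(["sudo", "dd", f"if={SRC}", f"of={OUT}", "bs=4M", "status=progress"],
               check=True)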
Etcher is a great tool for the majority of my Pi OS needs, especially when it comes to reducing clutter in my terminals.
My lab equipment and why
This is another filler post to hit a quota. There's nothing particularly groundbreaking in here, but keep reading if you want to know how I set up my home lab, along with some recommendations on setting up yours.
Aggregate 1:
For my main server, the workhorse of my household, I have a FreeNAS system running on a C2750 board with 32GB of RAM. Nothing fancy or special, like a re-commed PowerEdge or some other dual-Xeon box that sounds like a helicopter in my house. I have been using Bhyve to virtualize some Photon instances, and I run my Plex server as a jail. It also houses my NFS and media shares. I highly recommend a FreeNAS system for a home lab. QNAP is a nice alternative, but I love ZFS in a way that had coworkers making fun of me, so I needed a system that gives me the full system access that FreeNAS does without the headache of building out some Linux system that lacks the plugins. This system was an upgrade from another server about 5 years ago, which was itself an upgrade from a Plex Media Server running on a Linux box installed about 10 years ago.
Aggregate 2:
I have plenty of media endpoints throughout my house. I've tried the Fire Stick, Roku, MythTV (yeah, I went there), PlayStation 3, Xbox One, Xbox 360, and Chromecast. Roku wins; it does everything I want. I will gladly recommend spending the money on the best Roku device you can get. I may have a bias towards Amazon, but not in the home entertainment ecosystem.
Aggregate 3:
My home lighting is a big deal. It started when I was younger, and it's kind of funny: when my mother would wake up and check on my brother and me, she would turn on every light between where she was and where we were, and my father spent a good amount of his life turning off every light in the house. I have groups set up to turn lights on and off, using 90% GE Z-Wave light switches, a couple of Zooz switches, and a few Ecobee Switch+ units. My problem with Zooz is that they ruin consistency. If my smart switches are smart, I want them to behave consistently. Zooz is great for people who want to save money on smart lighting and don't care about consistency; you do not need to buy add-on switches with Zooz, because it replaces only the powered switch. The Ecobee lacks a major feature and is fairly pricey, but it is worth the money. The missing feature is that it cannot be used in multi-switch (three-way) circuits, but it has motion detection and Alexa built in. If you have a mesh network and don't need the multi-switch setup, Ecobee is where it's at.
Aggregate 4:
Aggregate 5:
I also use smart light bulbs. I have only tried Hue, so I won't go deep on this one. I like Hue, except when the power goes out and a few rooms in my house light up when it comes back on. It does not remember last known state the way many smart home devices do, but arguably it shouldn't, because you may need light immediately after an emergency.
For whole home voice, I use Alexa, because I have built smart home devices in the Amazon ecosystem. It is considerably more difficult to build a smart home device for private use in the Google ecosystem. Check my other blog entries for more information on this.
As anyone who reads my blog knows, I built my own smart cameras, so I will post the obligatory Raspberry Pi link.
Aggregate 6:
My last notable device is my desktop. I went on eBay and got some gamer's old computer: a 4-core AM2 processor in a decent motherboard with 4GB of RAM (I verified the RAM recently). I was amazed at what I could do with 4GB of RAM. I have an M.2 drive for storage only, because the system cannot boot from the PCIe slots, plus a 128GB SSD and an ancient graphics card. It is in desperate need of replacement. Yet I have done more with this old box than most of the developers I know do with their i7 systems. So maybe a Chromebook flashed with Ubuntu is all you need for a management box?
For experimentation I have quite a few bits and pieces lying around. I have learned the hard way that buying a bunch of sensors, whether individually or all as a single unit, may not be the ideal way to experiment. Spending $500 on Pi hats that each do one thing is ridiculous. I do have a Matrix Creator, and it has helped me understand integrating sensors with the Pi, but it is a little wonky when it comes to the pure setup of sensors and a Pi. If I were to learn about sensors again with my current knowledge, I would probably go with a 6-in-1 Z-Wave Plus multi-sensor and a Z-Wave dongle, probably using an MQTT broker.
Aggregate 7:
Matrix sensor hat, it has everything
Aggregate 8:
6 in 1 z-wave multi-sensor
Aggregate 9:
Z-wave dongle expensive edition
Thursday, April 12, 2018
Update to MotionEye
It has been containerized and it is considerably faster. This may be because I was using a Pi 3 in a case as my base unit; I have seen some stories that did not make me feel good about the Pi layout when it comes to processor heat. I containerized it on one of my Photon instances, and the image I pulled was a newer version of MotionEye but an older version of Motion. The streaming rate increased by 10-20x, and the capture rate at least doubled.
Keep in mind that this may be a stopgap measure until I have the nerve to create a secure tunnel to a cloud in order to perform the motion capture there, then drop it into cloud storage. Let's also be honest, I will probably go with an Amazon cloud solution to host it. I'm not an expert on the higher-math type of programming required to make a universal cloud solution that you can simply turn on like a service, but I will put up a post explaining my poorly designed process when I get around to it.
Until then, let's get this upgrade going. For Portainer users, create a motioneye volume and a motioneyelib volume. You should be able to start building the container immediately after. If you have an existing MotionEye Pi that you want to scrape the config from, hit the Backup button in the General Settings, then continue with the following Aggregate. I will validate the upgrade capability in the container over the next couple of months and either blog about how to pull an update, or confirm that the update happens automatically when restarting the container (if you have it set to always pull new).
Aggregate 1
Docker install instructions for MotionEye
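If you would rather script it than click through Portainer, here is a rough sketch of the same setup using the Python Docker SDK. The image tag, container paths, and port are assumptions on my part, so defer to the linked instructions above for the authoritative values.

import docker

client = docker.from_env()

# The two named volumes mentioned above, for config and state.
for name in ("motioneye", "motioneyelib"):
    client.volumes.create(name=name)

# Assumed image tag and mount points -- check the linked install instructions.
client.containers.run(
    "ccrisan/motioneye:master-amd64",
    name="motioneye",
    detach=True,
    ports={"8765/tcp": 8765},   # web UI; add your camera streaming ports here too
    volumes={
        "motioneye": {"bind": "/etc/motioneye", "mode": "rw"},
        "motioneyelib": {"bind": "/var/lib/motioneye", "mode": "rw"},
    },
    restart_policy={"Name": "always"},
)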
I've edited this to make sure you know not to upload the existing configuration from a Pi. It will break the install. But you can take a look at the tarball that you pulled from the Pi for all of your settings.
EDITED OUT --> Once installed, restore the backup in the General Settings menu if you have one. <-- Keep in mind that you will need to add the ports for streaming as described in the Aggregate. If you want to see something really neat and you left the old Pi instance running, turn it off and watch the streaming rates on your cameras; it should add another 10-30% boost to the new streaming rate. My latest and greatest Pi camera is now streaming at an average of 30fps.
We will need to modify the File Storage parameters to save to persistent storage. You can create a directory in one of the previously created volumes, or you can use the upload option. I will probably set the system to upload and be done with local storage for anything older than a day. Another point of honesty here: if someone breaks into my house, they will probably take anything technology related, so leaving the videos on local storage is not a great idea.
I'm looking forward to digging around more; I just need to figure out how to rename it from a UUID to something I can toss into a DNS server. I will have more updates in the future about best practices as I find them.
Saturday, April 7, 2018
5 Experimental Ideas
The environment I work in professionally involves a serious amount of computer science and engineering. There are always measurements to be made and new solutions to experiment with, but it isn't a place where you can just let things linger as a long-term science experiment. This is a bit of a blessing, because it lets you create a to-do list of things you might want to experiment with and prioritize it on your own time. I do have a cluster designated for nothing but experiments, and I only touch it about once a month.
Since I don't have too much free time to dig in and try some of the larger ideas, I started thinking about what would simplify the experiments. But I'd like good measurements to validate those experiments, so I will start the list off with an oddball.
1. Precision Time Protocol in a container
Like much of the modern IT world, I am all about containerization. But how do we measure internal bridge speed without having to account for the hops to the physical interface, switches, routers, and PTP/NTP servers? When you are looking at tight timing requirements for software, sometimes a fraction of a microsecond determines viability. I'd love to configure PTP in a container to validate some of the other experiments that containerize existing applications.
Aggregate 1
PTP Overview
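As a starting point, something like the sketch below is what I have in mind for this one: run linuxptp's ptp4l inside a container that shares the host network namespace, launched through the Python Docker SDK. The image name is made up (there is no official image I'm pointing at), and -S forces software timestamping since most lab NICs won't do hardware timestamps.

import docker

client = docker.from_env()

# Hypothetical image with linuxptp installed; ptp4l wants raw sockets and the
# host network namespace so it can see PTP traffic on the physical interface.
client.containers.run(
    "example/linuxptp:latest",          # placeholder image name
    command="ptp4l -i eth0 -m -S",      # -m: log to stdout, -S: software timestamps
    network_mode="host",
    cap_add=["NET_ADMIN", "SYS_TIME"],
    detach=True,
    name="ptp4l",
)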
An experiment I'd like to perform to test speed and capability is replacing some multicast applications with MQTT, using a broker and JSON publish/subscribe models. This could be a serious amount of rework on an existing tool due to the volume of multicast groups. To help judge whether the effort is worthwhile, I'd like to see:
2. Multicast to MQTT converter
This is a longshot. The goal is to have pub/sub messages written to a JSON database for review, which adds a hop to the network. Multicast is great for large integrated systems and works very well in terms of speed. MQTT without a broker lacks the command and control functionality of multicast, but it works on garbage networks and can get messages across daisy-chained serial cables pretty quickly. Pulling the published multicast packets and pushing them through an MQTT broker should allow review and replay of the multicast traffic, both to evaluate existing systems and to weigh the retooling effort.
Aggregate 2
I want this for every multicast messaging tool
Aggregate 3
And this study is why I want it
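A bare-bones version of that converter might look like the sketch below: join a multicast group, then republish each datagram through a broker as JSON using paho-mqtt (1.x-style constructor). The group, port, broker address, and topic naming are placeholders.

import json
import socket
import struct

import paho.mqtt.client as mqtt

MCAST_GRP = "239.1.1.1"   # placeholder group/port -- substitute the real ones
MCAST_PORT = 5000
BROKER = "localhost"

# Join the multicast group on all interfaces.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Republish every datagram to the broker as a JSON record for review/replay.
client = mqtt.Client()
client.connect(BROKER)
client.loop_start()

while True:
    data, addr = sock.recvfrom(65535)
    record = {"src": addr[0], "group": MCAST_GRP, "port": MCAST_PORT,
              "payload": data.hex()}
    client.publish(f"multicast/{MCAST_GRP}/{MCAST_PORT}", json.dumps(record))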
Another neat thing about a crazy lab environment is experiment replication. Not just replaying the experiment, but rebuilding the entire thing on a different set of hardware. I've gotten used to finding the shortest path possible to building out some of the underlying architecture. Some of these things are meant to be replaced before going into production, but I'm going to fill you in on a dark secret: development and test can become production if time runs out on the development cycle. Things like standing up a DNS server can fall through the cracks. I use pfSense for my DNS needs in small labs, which results in a much lower risk if it slips into production. That's fine once the environment is built, but what I really need is a point and click DHCP interface to quickly build the environment.
3. Point and click DHCP configuration tool
Replicating an existing environment means I should also be able to replicate the DNS server. Being able to point and click at a MAC address to modify an existing IP lease to match DNS is the goal. We really need to stop right here and acknowledge that I am being incredibly lazy and reliant not just on a GUI, but on a feature that isn't really that hard to configure without a point and click option; I'm just interested in getting an hour or two of my month back. This isn't terribly difficult to accomplish in pfSense, but the UI doesn't do it exactly how I want. The optimal solution would link into IPMI, show you host MAC addresses, and let you click to configure the DHCP lease for the application network. I'll figure something out that is close enough in my spare time.
Aggregate 4
The reigning champion of quick DNS and DHCP setup
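Until that tool exists, a thin sketch of the idea is just rendering static-lease entries from a MAC-to-host map, ISC dhcpd style here because the syntax is compact. The MACs and addresses are made up, and a real version would pull the MACs from IPMI and push the result into pfSense instead of printing a config fragment.

# Placeholder inventory: MAC -> (hostname, fixed IP). A real tool would pull
# these from IPMI instead of hard-coding them.
HOSTS = {
    "aa:bb:cc:dd:ee:01": ("node1", "10.0.0.11"),
    "aa:bb:cc:dd:ee:02": ("node2", "10.0.0.12"),
}

ENTRY = """host {name} {{
  hardware ethernet {mac};
  fixed-address {ip};
  option host-name "{name}";
}}"""

def render(hosts: dict) -> str:
    """Render ISC dhcpd static-lease stanzas so DHCP and DNS stay in sync."""
    return "\n\n".join(
        ENTRY.format(name=name, mac=mac, ip=ip)
        for mac, (name, ip) in hosts.items()
    )

if __name__ == "__main__":
    print(render(HOSTS))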
One of the tools that I rely on heavily at home is just not possible in a sandboxed environment: a ready-to-go Certificate Authority. An appliance that can shoot out certificates from a small amount of configuration and reset its root cert settings quickly would be very nice for things like management servers, registries, git repos, proxies, and application gateways. Right now it's usually a couple of hours of fumbling around with outdated SOPs to build something that is not remotely secure. A simple appliance that can be blown away in an instant without feeling bad would be nice. Minding the risk of dev becoming production, it can't have any bloat, especially anything that would create an instant security issue, like Flash or Java.
4. CA Appliance
You cannot rely on Let's Encrypt when you don't have internet access. This would also be nice for home users who want to establish SSL without reaching out to the internet: keep everything internal, without having to do crazy things like opening port 80 on your router to get your Let's Encrypt certificate. Of course, this is double edged. If you want the security of a certificate, will you trust the appliance? You still need to do the legwork of adding the CA to all of the client systems. But it would be convenient to have something local that can be rolled out quickly.
Aggregate 5
Like this, but without things like PKI or Java Requirements
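The core of such an appliance is small. Here is a sketch of generating a throwaway root CA with Python's cryptography library; the common name and lifetime are placeholders, and a real appliance would wrap this in certificate signing and a small API.

import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the root key and a self-signed CA certificate (placeholder name/lifetime).
key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Lab Root CA")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

# Blowing the appliance away is just deleting these two files and rerunning.
with open("lab-ca.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("lab-ca.key", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.TraditionalOpenSSL,
                              serialization.NoEncryption()))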
The final idea is something that would reduce troubleshooting hours and pinpoint the majority of issues in minutes. I have happily used OpenFlow as part of the VMware NSX product: you can watch the network flow between virtual machines or within the software defined network with ease, which lets you start an application, watch its traffic, and see where it stops. What would be really nice is some type of flow service on a local system. Turn on the tool, then execute your application or scripts from within it. Have a nice startup message about the current state, such as "iptables are on" or "firewalld is running", then display text based on firewall/iptables status: green for no firewall, blue for a firewall.
5. Application integration to protocol analyzer
Imagine logging into your box, executing the monitoring script, then starting an application. You can already get good information from systemctl on whether the application started. Being able to see if there is some network issue downstream is important to troubleshooting. Things like Wireshark already exist, but let's go deeper: let's find a way to tag the application in Wireshark prior to execution, so that all of the returned capture data is related to the application. The current tools can work, I just want them to work better. You shouldn't need to capture all network traffic to see where the application is failing. It might be time for some new tools in this arena.
Aggregate 6
I guess this will have to do, for now
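As a stopgap along those lines, here is a sketch that roughly approximates "tagging" an application: look up the ports a running process currently owns with psutil and hand tcpdump a filter built from them. The PID is a placeholder, and connections opened after the capture starts won't be picked up.

import subprocess

import psutil

def capture_for_pid(pid: int, iface: str = "any") -> None:
    """Capture only traffic on ports currently owned by the given process."""
    conns = psutil.Process(pid).connections(kind="inet")
    ports = sorted({c.laddr.port for c in conns if c.laddr})
    if not ports:
        raise SystemExit("process has no inet sockets open yet")
    bpf = " or ".join(f"port {p}" for p in ports)
    # Swap tcpdump for tshark if you want the capture straight in Wireshark's tools.
    subprocess.run(["sudo", "tcpdump", "-i", iface, bpf], check=True)

if __name__ == "__main__":
    capture_for_pid(1234)   # placeholder PID of the application under test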
While most of these ideas are for lab environments, the benefits would be widespread. I will be putting effort into my own configurations to eliminate the need for some of these tools, but a few of these ideas were due to come around years ago.
Sunday, April 1, 2018
My gripe with some smart devices
This is actually just a space filler, because I need a certain number of posts before advertising can be turned on. I will lay out my gripes, but it's not going to be a how-to or educational in any way. Unless you missed every lecture on planned obsolescence, you already know the drill on a bunch of these problems.
My vacuum stopped working. A nice robotic vacuum that did the lines perfectly on my floor. It was an amazing thing to come home to a clean house without the zigzagged lines that a Roomba leaves. I enjoyed coming home to a vacuumed floor with the Roomba, but this looked like I had a maid service.
I had to retire my old Roomba because I have a kid under 10 who was into Mario. He treated the device like a bad guy and jumped on it, shattering some internal part that I did not have the time or patience to fix. So, a few years later, I decided to upgrade to the Neato Connected. The warranty was for 1 year, and I expected any part that was going to fail to do so within that year. Nope. Almost a month after the anniversary, the brush could no longer turn. So I cleaned the brush and tried to clean every other piece that I could.
I have a pretty long history of taking things apart and putting them back together, most of the time upgraded, sometimes downgraded to garbage. I actually have the time, so I figured I would take this thing apart and see what was failing. Looking online at the parts, it might cost me less than a quarter of what I paid for the vacuum to fix it myself. There aren't really enough parts to justify the price, except that most of them come as assemblies rather than individual pieces, which always costs a little more.
I have a simple observation for modern engineering that I call the toilet rule. Look at the toilet, it has a bowl, a seat, a lid, and a tank. Take it apart. In the tank you have some type of handle, flapper, chain, bobber, and gasket. In the handle assembly you probably have 4-5 parts that link together the other parts, like springs and chains with hooks, but they are probably relatively inseparable. I call it the toilet rule because there are compartmentalized pieces to a toilet. To fix it, you need to know the basics of how a toilet works, and then you can fix the part that has failed.
The nice thing about the handle assembly is that it is reasonably priced. You are rightfully spending much more money on the porcelain than on the other parts. And this is where I have a major problem with smart devices: I am paying a premium for software. I am not happy when hardware fails and the cost of repairs does not account for the fact that the software did not fail. My robot vacuum was expensive because I could set a schedule from my smartphone, not because the tires were made from rubber upcycled from the game ball at the Super Bowl.
You want to see what failed on my robot?
That piece of plastic did not have a set of bearings behind the geared wheel, so it melted to the shaft it was spinning on. I know they developed this before fidget spinners became a thing, so maybe production of bearings was at an all-time low when they engineered it. Yet they have kept this same garbage design for later models. Since it is part of a motor assembly, you have to purchase the motor, opposing gear, and belt when you want to replace it. It also includes a sensor, but guess what doesn't come with the assembly.
So I ordered the assembly, which now ships with a lower-voltage motor, and started taking it apart so I could replace the piece of the plastic casing and the plastic gear that actually broke. I used the motor, belt, opposing gear, opposing casing, and sensor from the existing part. I put it all back together like Legos, put back the few screws that hold it together, and tried to fire it up. Oops, the screws on one side went too deep and I had to back them out a couple of millimeters (which shouldn't be possible if they used quality material and appropriate parts). It's faithfully cleaning my house again.
Now, let's look at what this means for the average home user. That repair would have been sent in for $200 of parts, shipping, and labor, when it was about $4 worth of poorly engineered parts that were actually broken. This applies to most devices that have some form of planned obsolescence. For smart devices as a whole, it means you want to buy an extended warranty with a very low deductible for your pricier devices. Because the development cycle is fast and there are annual device releases, you can't count on being able to resolve an issue with a $200-5000 device after a few years without getting refurbished parts. A replacement with the latest equivalent version is a reasonable expectation from an extended warranty.
On the flip side, let's look at something I have experienced with MyQ garage door openers. I'm not sure if they have resolved it, but what I experienced was a price that covered the hardware and the software, a reasonable amount of money for a great capability. Except they started charging for access to their API from your smart home hub. Want Alexa support? SmartThings? Pay up. It's not a huge problem, since most people will see the huge security vulnerability of integrating an entryway to their house with a voice assistant, but that should be up to the end user. The device is solid, and the parts are easily replaced with existing and future device parts, so no complaints there. But the software was already paid for in the 2.5-3x cost of the opener. They also allowed API access to a few hubs before announcing that they were closing it off to others. That's just bad business. Increase the price of all of the smart garage door openers by $15; everyone will still buy them, and you don't have to worry about people refusing to pay a monthly fee to use the product the way they want to.
To put it plainly, manufacturers are all over the map in how they value their products. This is a problem. They should be determining cost with traditional methods on parts, and with value-based pricing on software. The warranty period should be 2-3 years from the manufacturer. This is pretty basic stuff, and somehow many companies are failing at it.