Digging into stuff like TensorFlow, you might come across statements along the lines of "the idea has existed since the 80s, but the technology wasn't fast enough". There are certainly diagrams that can make that case, and probably some code examples. What is frustrating right now is that the hardware is finally fast enough, and the code is maturing quickly. That means that what was true two years ago is not true today.
I am most certainly not blaming the bloggers for this. I have a post about creating an Alexa skill that can still be loosely followed today, but the Alexa skill website is completely different from when I wrote it. I'm pretty sure the video requirements for streaming security cameras to the Amazon Show and Spot have changed since I wrote my camera posts. You may not even need to create your own RTSP reverse proxy to display them on your Fire TV anymore.
What I am having a problem with is the lack of decent blogs from the industry. Hands down, the best industry blog I have ever read is William Lam's VirtuallyGhetto. Proof of knowledge and deep understanding from someone who responds to legitimate questions. That is how the industry needs to do this. Meanwhile, Amazon blogs about building cloud applications for cloud developers, but offers nothing for a tech-savvy hobbyist.
Aggregate 1:
Virtually Ghetto
I don't want to complain about the lack of knowledge coming from the industry; the only people qualified to write about it are busy being better versed in it. I just wish they had some new hires they could throw into the fire to distill the information. Especially Amazon and Nvidia. Amazon offers free cloud services that I use to turn on a TV in my house. Nvidia is the back-end hardware for my AI containers. So, when I see an incredibly well written blog from under a year ago, I trust the information a little too much. It was also one of the top results in a Google search.
Aggregate 2:
Very well written blog
Unfortunately, Nvidia has since come out with version 2. Some of the other containers in that blog are not up to date either. They are from late 2017, and were probably put out as a single working release before the author/developer got too busy. This is where the industry needs to step in. The blog should be kept up to date by the company, the containers should come from the company, and the instructions should be clear. Considering how simple the install instructions are, it is a disgraceful failure that they do not have up-to-date documentation, especially for a company that charges for concurrent use of a GRID GPU on top of the $8k price tag to buy the damned thing. They have the money and resources.
So, the next blog I write will have the date listed at least twice, give specific instructions for a basic build on specific software, and will probably have multiple links back to Nvidia with offensive tags on the links.
Wednesday, August 8, 2018
The rapidly evolving hardware of SBC
I'm not really a hardware junkie, I just play one in planning meetings. I was probably a little more excited than most people about the Raspberry Pi 3B+. I had no idea what it meant at first, but there was a footnote in a few articles about Power over Ethernet (PoE) capability. It meant that I could add a Pi without having to run power to the location it served. That's the most recent example of getting excited over a feature that I actually cannot use.
I've also experienced roughly five power outages in my house over the past month. Not kidding. I think I brought it up when I was bragging about buying my new computer hardware. From what I have experienced at work, flash media dies quickly when you slam the power on it over and over. I've had to recover time servers, single-board systems, and some other random pieces of tech due to failed flash. That's what makes the decision to get more flash-based gadgets painful. I need a good way to back them up, a stock of spare flash media, and use cases that don't require five nines of uptime.
So, with that in mind, I'd like to share the four devices that I will purchase based on my projects. The first two are obvious, based on my blog: the Pi and the Pi Zero W. No need to dig around for model numbers; the most recent are what I will use. They are standardized on micro SD, and for the most part they are also standardized on power. Replacing a power block or drive requires little to no effort, and for things like security cameras, I have backups to clone from.
Aggregate 1:
Pi!
But I am also prototyping a mechanical device. That means I want a small form factor that responds to commands rather than a credit-card-sized system with a full OS that can replace a computer from the early 2000s. Which is why I am looking into getting a few Arduino boards. Commands and responses with cheap and easy Z-Wave and Zigbee spectrum add-ons.
Aggregate 2:
Arduino
And finally, I want a device that I can dedicate to a single specified purpose. I mess around on my computer more often than not. I recently forced an OS upgrade that stalled due to multiple previous upgrades, failed software experiments, and a massive hardware change. I want to set up an AI system without worrying about what kind of strange stuff I am doing on my desktop. And in a couple of months, I will be able to get what is basically a chipset made just for that purpose. It includes a CPU, GPU, and NPU (Neural-network Processing Unit).
Aggregate 3:
RockPro
So, who else has bought into features that they cannot use? And who else is looking forward to tech that they have already planned a purpose for?
Modern computing, remote execution
I'm somewhat certain that the rise of secure remote execution happened only a few years ago. I've been playing around with some tools at work that allow remote execution, but they are all locked into the same network, so they don't really answer my question. When did we get the ability to securely execute code remotely? I'm sitting at my desk with a bag full of hobby electric motors and a few spare Raspberry Pi systems, knowing I can make these motors start spinning from anywhere in the world without exposing ports to the internet. I'm probably going to use a combination of shell scripts, Node.js, and Python to do it. But this capability didn't just spring from the cloud.
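A minimal sketch of the kind of thing I have in mind, assuming an MQTT broker in the middle (the broker address, topic name, and message format here are all made up for illustration): the Pi only makes an outbound connection, so no inbound ports ever open.

```python
import json

def parse_command(payload):
    """Turn a JSON payload like {"motor": "on"} into True/False.

    Returns None for anything unrecognized, so a garbage message
    can't start a motor spinning.
    """
    try:
        command = json.loads(payload)
    except (ValueError, TypeError):
        return None
    state = command.get("motor") if isinstance(command, dict) else None
    if state == "on":
        return True
    if state == "off":
        return False
    return None

def run(broker="broker.example.local", topic="home/motors/bench"):
    """Subscribe and spin the motor on command; call this on the Pi.

    Requires `pip install paho-mqtt`. With a cloud broker such as
    AWS IoT you would add TLS certificates instead of a plain
    connection on port 1883.
    """
    import paho.mqtt.client as mqtt  # third-party, imported lazily

    def on_message(client, userdata, msg):
        state = parse_command(msg.payload)
        if state is not None:
            print("motor", "on" if state else "off")
            # On the Pi this would be GPIO.output(motor_pin, state)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(broker, 1883)
    client.subscribe(topic)
    client.loop_forever()
```

Because the subscription is an outbound connection held open to the broker, the command travels the same path whether I publish it from my couch or from another continent.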
I guess we will need to start from the beginning, with the first computing systems that operated closer to a series of engineered gauges than a piece of hardware running software. Let's try to follow some paths. The very first path I want to look at is the Automatic Computing Engine (ACE). This dives back more than seventy years. It was based on classified work that Turing had performed, and may be the first view into how a program runs from memory.
Aggregate 1:
ACE
We will hit fast forward through some government contractor and academic nerd fights. There was the Electronic Discrete Variable Automatic Computer (EDVAC), which used binary instead of decimal and helped establish the von Neumann architecture of a program stored in the same memory as its data (in contrast to the Harvard architecture, which kept instructions and data in separate memories). These were the first steps toward modern computers, but they lacked networking and the ability to run multiple applications simultaneously. Probably the first example of using multiple computers for a single application was SAGE, which combined radar data to create a unified picture. That work eventually led to ARPANET, which was the birth of the internet.
Aggregate 2:
EDVAC
Aggregate 3:
SAGE
This will eventually lead to distributed computing, which has some amazing characteristics: concurrency of components, lack of a global clock, and independent failure of components. These systems distribute tasking, but the early ones did not have much security and had a low tolerance for failure. Still, they were amazing when they were created. This is what I had in high school and college on the internet: Napster and online video games. About five years prior, the Object Request Broker (ORB) became a mainstream technology, issuing commands from one computer to another. You could now issue commands from your desktop without having to log in to the system that would execute them. Created without any form of trust, of course. A long step away from our current cloud computing or domain methods. These technologies will converge.
Aggregate 4:
Distributed Computing
Aggregate 5:
ORB
I need to pause for a second to apologize for the heavy reliance on Wikipedia. The first part of this is history, the last part will be a little more cutting edge education.
Becoming more modern, we have the Common Object Request Broker Architecture (CORBA), which allows an ORB to work across Windows, Linux, Mac, Unix, and BSD. You should be able to use any compliant language to issue commands to other systems. There is also The ACE ORB (TAO), but this is not the ACE mentioned in aggregate 1; it is a new ACE. We have gotten into the nested-acronym portion of our evolution. This ACE is the Adaptive Communication Environment, a framework that ties together advanced features of operating systems. TAO is a low-level computing engine that speaks between systems to execute applications in real time. When I say real time, in this instance it means "as if it were a local command".
Aggregate 6:
TAO
Aggregate 7:
ACE part 2
As security grew and became an integrated part of computing, some components of the model shifted. Although the security was half-hearted (security features were turned off by default in the OS), it was at least present. Kerberos ruled environments with tickets and realms. Massive SNMP configurations were used to trigger responses to events. A serious mashup of security and automation. I don't feel like linking to these technologies, and I have more to add about them. There were LDAP and AD domains that could link into a realm. SNMP was a shitty replacement for the MQTT and DDS messaging that should have been configured in the first place. But none of these things matter as much to consumers. They wanted a way to use familiar tools to accomplish trivial tasks quickly.
Microsoft almost got us there with the Simple Object Access Protocol (SOAP). I still remember checking for SOAP port activity at an old job. Messages that could control a networked application or device. SOAP is still heavily used today, and it was a great stepping stone to what I currently use for cloud-based applications: Representational State Transfer (REST). While SOAP was built to be used on the internet, REST was built out of the internet. Commands over HTTP, with no need to open local firewall ports to execute, fit the description of both SOAP and REST. Simple messages that execute quickly, the ability to send code rather than a simple command, and floating on top of existing protocols rather than attempting to become one are what make REST the winner.
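To show just how light a REST command is, here is a sketch using nothing but Python's standard library. The endpoint URL and the JSON fields are invented for illustration; the point is that the whole "protocol" is an ordinary outbound HTTP request.

```python
import json
import urllib.request

def build_rest_command(url, payload):
    """Build an HTTP POST carrying a JSON command.

    Nothing here needs a custom client stack or an open inbound
    firewall port; it is a plain outbound HTTP request.
    """
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical hub endpoint; actually sending it would just be
# urllib.request.urlopen(req).
req = build_rest_command(
    "https://hub.example.com/api/lights/kitchen",
    {"state": "on", "brightness": 80},
)
print(req.get_method(), req.full_url)
```

Compare that to a SOAP envelope, with its XML wrapper and WSDL contract, and the "winner" part becomes obvious.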
Aggregate 8:
REST
The nice thing about using HTTP to execute is that it is baked into every OS. The uniform interface of REST makes it easy to tie in any other method of code execution. Think about your smart home hubs: your lights may operate on the Zigbee spectrum, using MQTT to carry the message to each individual light bulb, but the message the hub is listening for arrives over REST. For the time nerds, it's similar to the Precision Time Protocol (PTP) compared to the Network Time Protocol (NTP); REST would be the NTP that feeds the PTP that is MQTT. While I have built many devices that use standard networks running on WiFi or Ethernet, converting them to an alternative spectrum like Z-Wave or Zigbee would just be a matter of adding a hub to capture the REST command and send it to the device with MQTT.
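The hub translation I'm describing boils down to one small function: take the path and body of the incoming REST call and emit an MQTT topic and payload for the bulb. The path scheme and topic layout below are hypothetical, just to make the mapping concrete.

```python
import json

def rest_to_mqtt(path, body):
    """Map a REST call like POST /devices/livingroom/lamp {"state": "on"}
    to an MQTT publish on the topic home/livingroom/lamp.

    The hub would hand the result to its MQTT client, which relays
    it over Zigbee or Z-Wave to the bulb itself.
    """
    parts = [p for p in path.strip("/").split("/") if p]
    if len(parts) < 2 or parts[0] != "devices":
        raise ValueError("unrecognized path: %r" % path)
    topic = "home/" + "/".join(parts[1:])
    payload = json.dumps(body)
    return topic, payload

topic, payload = rest_to_mqtt("/devices/livingroom/lamp", {"state": "on"})
print(topic, payload)  # home/livingroom/lamp {"state": "on"}
```

Everything spectrum-specific stays on the far side of that function, which is exactly why swapping WiFi devices for Zigbee ones is "just" a hub problem.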
The future will obviously hold advances on the existing systems. If we look at progress in a tick-tock fashion, I would guess that remote execution has ticked. The tock is machine-learned behavior that executes without a command, being developed and fleshed out on the cutting edge right now. The next tick will be an increase in speed and rapid reconfiguration based on integrated machine learning to further automate. The next tock is basically what people will be gambling on in the stock market. It could be micro-expression recognition, to not only automate but personalize based on your observed response to an automation. In ten years, your lights might dim because your hub knows you are hung over.
I adore comments of ridiculous speculation about the future.