Ubuntu lists serial USB devices by default under /dev/ as ttyUSBx, where x is a number starting at 0. The issue with this is that if USB devices are swapped to a different port, it's not clear which ttyUSBx device relates to which physical USB device.
There are a few ways to change settings to point to the correct USB device. I'll list the two most common ways. Below is an example for an RFXtrx433 USB Transceiver, but the process is the same for any other USB device.
Point the app to the device’s serial id. Run this in a terminal:
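A minimal sketch of such a command, assuming the standard /dev/serial/by-id/ layout:

```shell
# List serial USB devices by their persistent by-id symlinks.
# The 2>/dev/null guard covers the case where the directory doesn't
# exist because no serial devices are plugged in.
ls -l /dev/serial/by-id/ 2>/dev/null || echo "no serial USB devices found"
```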
This will list all serial USB devices. Pick the one that you need to communicate with and set its address in your app, e.g.
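For instance, with a made-up device id (yours will differ):

```shell
# Hypothetical persistent path for an RFXtrx433 - substitute the id that
# ls -l /dev/serial/by-id/ reports for your own device.
DEVICE="/dev/serial/by-id/usb-RFXCOM_RFXtrx433_A1XYZ123-if00-port0"
echo "$DEVICE"
```

Because the path is a by-id symlink, it stays the same no matter which physical port the transceiver is plugged into.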
I have long been looking for a way to add a light in my son's bedroom to try and teach him to stay in bed until it's morning. I've seen many clocks that show a different colour for sleep time and it's-ok-to-wake-up time, but they were too "childish". They're also all "dumb", in that the time isn't synced, so I'd have to adjust it every so often (at the very least twice a year when daylight saving changes).
I also wanted to be able to set a different time at the weekend.
By this stage I was actually thinking of building a lamp with some of the LED strip leftovers I have from past projects, driven by a WeMos D1 Mini Pro. I ended up doing just that.
The only thing I needed was something to put the LED strip in that would look ok.
I thought the IKEA GRÖNÖ would be a perfect candidate. Plus at £6 it’s rather inexpensive.
The Arduino code is pretty much the one I used for the Conservatory LED Strips, though there are only 8 LEDs to control. It's still not finished, as I want to add an ultrasonic sensor so anyone can turn it on without needing access to Home-Assistant, but for now, here is how it looks:
So I’m still on Ubuntu 16.04, and from version 0.65, Home-Assistant.io now needs Python 3.5.3 or later to run.
The issue is that, by default, Ubuntu 16.04 only goes up to Python 3.5.1.
I could update to 3.6, but I ran into backwards-compatibility issues. So I thought this would be a good excuse/opportunity to try Docker.
And my oh my is it awesome. No need to worry about any prerequisites, it takes care of everything that’s needed to run HA. It also means updating and downgrading HA is a breeze.
The only issue I've seen so far is that the Python scripts I have running to update some of HA's devices no longer work, as they use libraries that aren't installed in HA's Docker image. I looked at a few options but ended up moving them completely out of HA and having them publish over MQTT instead of interacting with HA directly.
Moving the scripts to communicate via MQTT also means that, should I lose my internet connection, my HA instance will keep updating, as I'm no longer connecting out and back via HTTPS.
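As an illustration of the pattern (the topic name, reading and broker host below are made up, and this assumes the mosquitto-clients package is available):

```shell
# Illustrative sketch: a script publishes its reading straight to the local
# broker instead of calling HA over HTTPS. HA then picks it up via an MQTT
# sensor subscribed to the same topic.
TOPIC="home/garden/temperature"
READING="21.5"
# Guarded so the sketch degrades gracefully if no client/broker is available.
if command -v mosquitto_pub >/dev/null 2>&1; then
  mosquitto_pub -h localhost -t "$TOPIC" -m "$READING" || echo "broker not reachable"
else
  echo "mosquitto_pub not installed"
fi
```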
I've already added InfluxDB and Grafana on Docker, and I use Portainer to manage it all. I'm now planning to move other apps to Docker (Plex, WordPress, OpenALPR, etc.).
Finally, I use docker-compose instead of docker run to launch my containers; it's easier to manage all the options and means I don't need a separate bash file to launch each container. Here's the compose file I use for HA, InfluxDB and Grafana:
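In outline, it looks something like this (the host paths and image tags are illustrative placeholders rather than my exact settings):

```yaml
# Sketch of a docker-compose.yml for the three services.
version: '3'
services:
  home-assistant:
    image: homeassistant/home-assistant:latest
    volumes:
      - /home/user/homeassistant:/config      # placeholder config path
      - /etc/localtime:/etc/localtime:ro
    network_mode: host
    restart: unless-stopped
  influxdb:
    image: influxdb:latest
    volumes:
      - /home/user/influxdb:/var/lib/influxdb  # placeholder data path
    ports:
      - "8086:8086"
    restart: unless-stopped
  grafana:
    image: grafana/grafana:latest
    volumes:
      - /home/user/grafana:/var/lib/grafana    # placeholder data path
    ports:
      - "3000:3000"
    restart: unless-stopped
```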
I’ve recently discovered OpenALPR and I’m really impressed by it.
OpenALPR is software that performs Automatic Licence Plate Recognition (hence ALPR) on a video stream. The free account is quite basic, as it does not offer alerts, but it still recognises plates and saves them, as well as the actual picture, for a few days.
And this is how impressive it is at recognizing plates:
OpenALPR Full Image
It appears to be quite resource-hungry though (well, my camera's resolution is 3072 × 1728!), so I've only got one camera fed through the agent for now. I'm planning on trialling another agent on a Raspberry Pi to see how it copes…
[EDIT – 16th Aug 2018]
Turns out a Raspberry Pi would not be powerful enough to cope, or at least that's what the people at OpenALPR say… 🙁
I've been using the excellent cloudmqtt since my early days getting to know MQTT. It's been flawless except for three or four failures this month, for which I had to contact cloudmqtt support. Now, I have to praise the support team, as they did an excellent job very quickly, but the few faults made me think it was time to get my own broker.
In addition to moving away from potential third-party failures, it would allow me to keep controlling my devices should I have any internet access issues.
So I followed this page to set up mosquitto on my server.
Once done, I didn't want to just switch everything over from cloudmqtt to mosquitto, as I have quite a few devices connected to cloudmqtt, some less accessible than others. I therefore created a bridge between my mosquitto and cloudmqtt to get the best of both worlds.
This was done with a new mosquitto config file in the `/etc/mosquitto/conf.d/` directory. I used `cloudmqtt.conf` (it can be any name, as long as it ends with `.conf` so mosquitto will read it) with the following info:
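In outline, a bridge section looks like this (the address, port and credentials are placeholders for your own cloudmqtt instance details, not working values):

```conf
# Bridge the local mosquitto broker to cloudmqtt.
connection cloudmqtt
address <instance>.cloudmqtt.com:<port>
remote_username <user>
remote_password <password>
try_private false
start_type automatic
# Mirror every topic in both directions at QoS 0.
topic # both 0
```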
I’ve recently purchased a Xiaomi Dafang Camera from GearBest. Why? Well it was cheap, 1080p resolution and has Pan-Tilt functionality as well as audio recording.
A couple of let-downs, however:
The camera did not offer any RTSP streaming functionality (so not possible to use motion)
The camera uploads the video feed to some server in China for the Xiaomi Mi app to retrieve
Not happy with either of the above, I started looking around for a solution, and found that Elias Kotlyar had managed to hack it to provide exactly what I was after.
I now have a camera that behaves like a "normal" CCTV camera via motion. I've also created a switch in Home-Assistant to enable motion-detection video recording only when nobody's home. Check my github page for more details.
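A sketch of what such a switch can look like, using motion's webcontrol API to pause and resume detection (the port and camera id assume motion's defaults; this is illustrative rather than my exact config):

```yaml
# Illustrative Home-Assistant command_line switch.
# Assumes motion's webcontrol is enabled on its default port 8080;
# camera id 0 addresses all cameras.
switch:
  - platform: command_line
    switches:
      camera_motion_recording:
        command_on: 'curl -s http://localhost:8080/0/detection/start'
        command_off: 'curl -s http://localhost:8080/0/detection/pause'
        friendly_name: Camera motion recording
```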
In addition to the aforementioned standard features, Elias offered manual control of the LED, so I can use it as a quick way of showing the camera's status (on, off, motion recording enabled). My short-term goal is to add MQTT to the camera to make it easier to control the LED, and potentially the motors for pan & tilt. The longer-term goal would be to get motion to control the pan-tilt functionality automatically, but that is likely to be a much more complex problem to solve, especially since motion documents that feature as "permanently at the experimental stage"…
The camera is however so good and cheap that I already bought another one 🙂