I feel this first prototype went pretty well. It was mostly a technical exercise and a way to explore a new platform and API, but I did find some interesting limitations and was able to do some iteration.
- The home device will go to sleep if there is no user response after a few attempts.
- It doesn’t have access to a lot of Google services that you’d think it should (e.g. shared calendars, Gmail, FaceTime, some aspects of search)
- You have to prompt it or it will just repeat itself waiting for a prompt. It’s been noted in the forums that a timeout would be more graceful.
- The setup uses only a phone app (with no web app equivalent), which means that connecting to a hotspot on your phone is basically impossible.
- The app itself is quite clumsy.
- It doesn’t support enterprise networks (username / password), which limits where you can use it.
- You have to hook your project into something like three Google API services just to be able to test it, which is kind of madness, and confusing.
- These services do not tell you that there is a limit on the number of projects, which means I was doing some tests, then hit a wall where I couldn’t do anything, because the services keep your deleted projects for 30 days before removing them. I spent an hour and a half on a support call figuring this out and getting more projects added to my account.
- You can tell it’s very new. A lot doesn’t work, and a lot is missing, software-wise, compared to its American counterpart.
- It can’t do push notifications, which means it can’t, for example, wake up on its own and say something to you. You always have to invoke it.
Most of the response was positive. I find humor is a good vehicle for getting weird thoughts or scary ideas across. I did get some questions about how this device is different from, say, an assistant on your phone. It really isn’t that different, EXCEPT in how you relate to it. Talking to a physical object is very different from talking to your phone; your phone is something you’re already used to. Your phone can also do things like push notifications, reminders, etc.
I think what worked really well is the call and response of the platform. It’s built for that, but it sets up a kind of weird performance you have to do with the Google Home. The framework I was using (flask-assistant) also felt familiar, and I was able to do quite a bit with it in terms of setting up a program and integrating it with some other logic.
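To make the call-and-response shape concrete: the framework maps named intents to handler functions, and each handler either asks a follow-up (keeping the session open) or tells a final answer (closing it). The snippet below is a minimal standard-library sketch of that dispatch pattern, not flask-assistant’s actual internals; the intent names and the `ask`/`tell` wrappers are illustrative.

```python
# Sketch of intent -> handler dispatch, modeled loosely on how a
# framework like flask-assistant routes requests. Names are illustrative.

ACTIONS = {}

def action(intent_name):
    """Register a handler for a named intent (in the spirit of @assist.action)."""
    def register(func):
        ACTIONS[intent_name] = func
        return func
    return register

def ask(speech):
    # An "ask" keeps the conversation open and waits for the next prompt.
    return {"speech": speech, "expect_reply": True}

def tell(speech):
    # A "tell" ends the conversation.
    return {"speech": speech, "expect_reply": False}

@action("greeting")
def greet():
    return ask("Hi! What would you like to do?")

@action("goodbye")
def goodbye():
    return tell("See you later.")

def handle(intent_name):
    """Dispatch an incoming intent to its registered handler."""
    return ACTIONS[intent_name]()

print(handle("greeting")["speech"])       # Hi! What would you like to do?
print(handle("goodbye")["expect_reply"])  # False
```

The "weird performance" comes from the fact that every turn has to flow through exactly this kind of prompt/response loop: the device never speaks unprompted.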
The workflow was a little strange sometimes. The documentation for the Google Home is very much set up to guide you into making commercial apps rather than experimenting, which meant I hit a lot of roadblocks on things like polling and bypassing responses. In the end I had to “fake” some of the polling, but it still worked on the surface. I also found a lot of gaps in flask-assistant’s documentation, but that is somewhat the norm with open source software. Some items, like “annotations”, were also a bit confusing.
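Since the device can’t wake itself, one way to “fake” polling is to check for pending events only at the moment the user invokes an intent, so the reply looks like the device had been keeping watch on its own. This is a hypothetical standard-library sketch of that idea, not the code from the prototype; `on_event` and `on_user_prompt` are made-up names.

```python
from collections import deque

# Events that arrive from other logic while the Home is idle. The device
# can't push these out, so we hold them until the next user invocation.
pending_events = deque()

def on_event(message):
    """Queue something we'd like to 'announce' later (hypothetical helper)."""
    pending_events.append(message)

def on_user_prompt(question):
    """Called when the user invokes the assistant. Drain any queued events
    first, so the reply looks like the device noticed them by itself."""
    parts = []
    while pending_events:
        parts.append("By the way: " + pending_events.popleft())
    parts.append("You asked: " + question)
    return " ".join(parts)

on_event("the build finished")
print(on_user_prompt("what's the weather?"))
# By the way: the build finished You asked: what's the weather?
```

It only works “on the surface” because nothing happens until the user speaks; a genuinely idle device stays silent no matter how many events pile up.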
I think if I were to continue with this, I’d take a bit of time to map out a conversation logic tree to see where I might want things to be “remembered”, and to get a better lay of the land for larger conversations. I might also flip over and try the Alexa for a few things; I suspect it might be better in terms of portability.
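A conversation logic tree with “remembered” slots could start as something as simple as a nested dict plus a per-session store. This is a hypothetical sketch of what mapping it out might look like, not code from the prototype; the node names and the `step` helper are invented for illustration.

```python
# Hypothetical conversation tree. Each node has a prompt, optionally a
# slot name to remember the user's answer under, and a pointer to the
# next node.
TREE = {
    "start": {"prompt": "What's your name?", "remember": "name", "next": "mood"},
    "mood": {"prompt": "How are you feeling today?", "remember": "mood", "next": "done"},
    "done": {"prompt": "Thanks!", "remember": None, "next": None},
}

def step(session, node_id, user_reply=None):
    """Advance one turn: store the reply if this node remembers it,
    then return the next node id and its prompt."""
    node = TREE[node_id]
    if user_reply is not None and node["remember"]:
        session[node["remember"]] = user_reply
    next_id = node["next"]
    return next_id, (TREE[next_id]["prompt"] if next_id else None)

session = {}
node, prompt = step(session, "start", "Ada")
node, prompt = step(session, node, "great")
print(session)  # {'name': 'Ada', 'mood': 'great'}
```

Sketching the tree on paper first would show exactly which nodes need a `remember` slot before committing anything to code.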