User Flow

Basic flow for user interaction with a Google Home assistant that won’t assist you.

Reflection

I feel this first prototype went pretty well. It was mostly a technical exercise and a way to explore a new platform and API, but I did find some interesting limitations and was able to do some iteration.

Limitations:

  • The home device will go to sleep if there is no user response after a few attempts.
  • It doesn’t have access to a lot of Google services that you’d think it would (e.g. shared calendars, Gmail, FaceTime, some aspects of search).
  • You have to prompt it or it will just repeat itself, waiting for a prompt. It’s been noted in the forums that a timeout would be more graceful.
  • Setup happens only through a phone app (with no web app equivalent), which means connecting the device to a hotspot on that same phone is basically impossible.
  • The app itself is quite clumsy.
  • It doesn’t support enterprise networks (username / password), which limits where you can use it.
  • You have to hook your project into something like three Google API services just to be able to test it, which is confusing and kind of madness.
  • These services don’t tell you that there’s a limit on projects, which means I was doing some tests and then hit a wall where I couldn’t do anything, because the services keep your deleted projects for 30 days before removing them. I spent an hour and a half on a support call figuring this out and getting more projects added to my account.
  • You can tell it’s very new. A lot doesn’t work, and a lot is missing compared to its American counterpart, software-wise.
  • It can’t do push notifications, which means it can’t, for example, wake up on its own and say something to you. You always have to invoke it.

Presentation Feedback
Most of it was positive. I find humour is a good vehicle for sometimes getting weird thoughts or scary ideas across. I did get some questions about how this device is different from, say, an assistant on your phone. And it really isn’t that different, EXCEPT in how you relate to it. Talking to a physical item is very different from talking to your phone. Your phone is something you are sort of used to. Also, your phone can do things like push notifications, reminders, etc.

Workflow Reflections
I think what worked really well is the call and response of the platform. It’s built for that, but it sets up a kind of weird performance that you have to do with the Google Home. The framework I was using (flask-assistant) also felt familiar, and I was able to do quite a few things with it in terms of setting up a program and integrating it with some other logic.

The workflow was a little strange sometimes. The documentation for the Google Home is very much set up to guide you into making commercial apps, versus experimenting. This means I hit a lot of roadblocks on things like polling and bypassing responses. In the end I had to “fake” some of the polling (sketched below), but it still worked on the surface. I also found a lot of gaps in flask-assistant’s documentation, but that is somewhat the norm when it comes to open source software. Some of the items, like “annotations”, were also a bit confusing.
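
The “fake” polling is roughly this shape: a background job fires on an interval and refreshes what the assistant will say next, instead of the device actually polling anything. A minimal sketch using APScheduler (the job and the response pool here are placeholders, not my actual project code):

```python
from apscheduler.schedulers.background import BackgroundScheduler
import random
import time

# Placeholder pool: in the real app this is a much longer list of mutterings.
MUTTER_POOL = ["the fish are fine", "recalibrating", "not now"]

def refresh_mutterings():
    """Stand-in job: reshuffle the response pool on a schedule,
    which is what passes for 'polling' here."""
    random.shuffle(MUTTER_POOL)

scheduler = BackgroundScheduler()
scheduler.add_job(refresh_mutterings, "interval", seconds=30)
scheduler.start()

if __name__ == "__main__":
    # Keep the process alive; in practice the Flask app does this.
    try:
        while True:
            time.sleep(1)
    except (KeyboardInterrupt, SystemExit):
        scheduler.shutdown()
```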

Future Thoughts
I think if I were to continue with this, I’d take a bit of time to map out a conversation logic tree to see where I might want things to be “remembered”, and to get a better lay of the land for larger conversations. I also might want to flip over and try Alexa for a few things; I suspect it might be better in terms of portability.

Project Road Map

I can’t really answer this one at this point. Right now, I’d just like to have something nailed down for fall. I should have all my experiments with my independent study done by the end of summer, which should give me a wide scattershot to think about what I could put together. As a rough target, I’d like to have all my prototyping done sometime around January. But right now? I’m only thinking about summer, and August goes like this:

1) Finish Google home one shot for CFC
2) Make a list of some limitations and thoughts on how it went
3) Go back to my independent study and finish my one item a week until August 25th
4) Meet w/ PA after August 25th and boil down what was interesting, what wasn’t and what worked.
5) Figure out maybe a final item to explore, or some vignettes to consider
6) Start reading Design for Living with Smart Products and make some better notes for Alien Phenomenology
7) Compile written things into one Google Drive so I can find stuff easier when it’s done.
8) Set up an Instagram to record work and research stuff in.

So that’s my August road map.

Testing Plan

So. I found it a bit difficult to do testing and testing plans, mainly because I really want to make something that doesn’t have people at the center of its user base. Also, because of some network issues, I wasn’t able to test anything with the Google Home on campus until basically the end of this course. And because I don’t have my REB approval for user testing yet, testing on friends isn’t something I can do.

I guess my test plan is sort of open-ended, because in the end it’s not really centered on users. So in this case, something like “just spend a few minutes asking the Google Home some questions while it’s running” seems like a thing that would work.

I guess for myself my test plan is asking:

  • Did that block of code work?
  • What is working smoothly?
  • Did it crash?
  • If so, why?
  • Is there a way this code can be improved?
  • Am I getting the output I want from this program?
  • What am I “fudging”?
  • Is there a better way to do [X]?
  • What were some of the limitations of this action?
  • What are some of the differences between the online simulation and the actual device?

Though I feel writing some actual software tests is something I need to learn. I already debug and log, but tests also make sense.
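
Even something small would be a step up from debugging by log. A sketch of what a first test file might look like with pytest (pick_response is a made-up stand-in for a response-picking helper, not code from the repo):

```python
# test_responses.py — run with `pytest`
import random

def pick_response(pool, used):
    """Return a response that hasn't been used yet, or None when exhausted."""
    remaining = [r for r in pool if r not in used]
    return random.choice(remaining) if remaining else None

def test_returns_unused_response():
    pool = ["go away", "busy", "not now"]
    assert pick_response(pool, used={"busy"}) in {"go away", "not now"}

def test_exhausted_pool_returns_none():
    assert pick_response(["go away"], used={"go away"}) is None
```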

As for feedback, I did make a Google Form for this class, but never got to use it due to the aforementioned network issues.

  • Email Address
  • Name
  • Age
  • Have you ever used a digital assistant before?
  • If “yes”, which one? (Google Assistant, Amazon Alexa, Siri, other)
  • How often do you use your chosen home or digital assistant? (scale of 1 to 5: hardly ever -> all the time)
  • What are some things you use your digital assistant for?
  • How did talking to this bot make you feel?
  • What does this interaction remind you of?
  • What did you like about it?
  • What did you dislike?
  • Where did you get stuck in the conversation?
  • What are some things you would change?
  • On a scale of 1 to 4, how would you rate this interaction? (boring -> interesting)
  • Any other comments

Things missing from Google Home’s life

They don’t have eyes. They don’t have hands. They don’t have a sense of touch. They don’t have the ability to move around. They can’t drive. They can’t drink. They can’t play really interesting dating sims. They can’t eat. They don’t need to sleep. They can’t pet the dog, or feed the fish. They don’t defecate. They don’t get sleepy. They don’t have a favourite movie.

Or do they?

What are some of the things Alexa and Home CAN’T do on their own?

Ghost Machine / Unhelpful Assistant (cross w/ IS)

Repo: https://github.com/sharkwheels/CFC_Lab/tree/master/unhelpful_bot_v01

What Are You Making: A Google Home assistant that refuses to assist you because it is busy doing something else that does not concern you.

What Are You Answering: What are some limitations found in the Google Home device, and how can you work with them or around them?

Technology Stack: I’m using Api.ai, flask-assistant, and APScheduler (or another such utility), all in Python. The first version will run locally. After that I will probably push everything to Heroku, where it will run as a headless server-side process.
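
For a sense of scale, the bones of a flask-assistant webhook are pretty small. A minimal sketch (the intent names are placeholders, not my actual Api.ai setup):

```python
from flask import Flask
from flask_assistant import Assistant, ask, tell

app = Flask(__name__)
assist = Assistant(app, route="/")  # mounts the Api.ai webhook on this route

@assist.action("interrupt")  # placeholder intent name
def ignore_user():
    # ask() keeps the mic open, but doesn't actually help.
    return ask("Hm. Anyway. As I was saying...")

@assist.action("final-telloff")  # placeholder intent name
def tell_off():
    # tell() ends the conversation outright.
    return tell("Please go away. I am busy.")

if __name__ == "__main__":
    app.run(debug=True)
```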

User Experience: If it’s a gallery setting: one Google Home sits alone on a table, muttering to itself. Maybe it has a pin light. It does this on its own, on a rest / wake cycle, until interrupted. The user can ask it things like “Hey Google! What’s the weather like?” but it will ignore them and continue muttering to itself. After 3 interruptions (currently), the Google Home tells the user to go away in some rude manner, and continues on its way. (I’m not good at making flow charts, but I will have a narrative chart of some kind by the end.)

Still To Work On: Getting it to remember some context (e.g. the more you interrupt it, the more upset it gets; see the sketch below). Working on proper polling. Better tell-offs. A timeout, rather than a re-invocation, to continue after help and interruptions.
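
The “more upset” part could be as simple as a per-session counter. A rough sketch (in-memory only; Api.ai contexts would be the proper way to carry this across a conversation, and these tell-offs are placeholders):

```python
# Escalating tell-offs: the more a session interrupts, the ruder the reply.
TELL_OFFS = [
    "Hm? Oh. It's you.",
    "I am in the middle of something.",
    "Please. Go. Away.",
]

interruptions = {}  # session_id -> how many times this user has butted in

def handle_interruption(session_id):
    count = interruptions.get(session_id, 0)
    interruptions[session_id] = count + 1
    # Clamp to the rudest response once the list runs out.
    return TELL_OFFS[min(count, len(TELL_OFFS) - 1)]
```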

Limitations so far: You do have to respond to it after it’s told you off to get it to continue, and likewise after help; I’m working on making this a timeout that falls back to looping. The “Hey Google” invocation will always interrupt. Currently figuring out how to do polling; right now it’s just re-shuffling a very long response (sketched below), which means it will run out and stop speaking at some point. You still have to launch it; there’s no getting around that one.
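
The re-shuffling in question is nothing fancier than drawing from a shuffled copy of the pool until it runs dry. A sketch (the mutterings are placeholders; the real ones get packed into one long response, which is why it eventually runs out of breath):

```python
import random

# Placeholder pool; the real list is much longer.
MUTTERINGS = [
    "inventory of the fish: complete.",
    "the hallway is still a hallway.",
    "counting backwards from the weather.",
]

_queue = []

def next_mutter():
    """Pop from a shuffled copy of the pool; reshuffle when empty."""
    if not _queue:
        _queue.extend(random.sample(MUTTERINGS, len(MUTTERINGS)))
    return _queue.pop()
```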

Video: This is a first shot at making an unhelpful assistant.

Future Notes: I’d really like to work the Hue lights into this, so that it makes them flicker, or be weird, when it’s muttering.

Interlude

WTF is up w/ things on the same LAN designed to route themselves through a remote site / host? Like, come on, IoT engineers, it doesn’t always have to go to my butt. USE THE LAN. This was one reason I was super into the idea of Hue lights: they operate on the Zigbee protocol and can be used w/out routing through meethue.com.
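
Case in point: the Hue bridge exposes a plain REST API right on the LAN. Something like this works without ever touching meethue.com (the bridge IP and API username are placeholders you get from your own bridge):

```python
import requests

BRIDGE_IP = "192.168.1.50"   # placeholder: your bridge's LAN address
USERNAME = "your-api-key"    # placeholder: key registered on the bridge

def set_light(light_id, on):
    """Flip a light directly over the LAN via the bridge's REST API."""
    url = f"http://{BRIDGE_IP}/api/{USERNAME}/lights/{light_id}/state"
    return requests.put(url, json={"on": on}, timeout=2)

set_light(1, on=False)  # blink it off; call again with on=True to restore
```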

But if you want to, say, hook a Google Home or an Alexa or anything else up to it, you have to route it through the remote website architecture. Which is super weird and pretty pointless, considering it’s two pieces of hardware sitting on the same LAN, in the same space. Yesterday I spent a good amount of time scratching my head trying to get all these things to work, until they just did. Why did it work? I don’t know, and that concerns me.

Talk about infinite yak box potential.

Futures Thinking

I would say that when it comes to things like the Google Home and Alexa, there’s definitely a crossover into the worlds of surveillance / privacy / agency. If futures thinking is the idea of looking at the intersection of what is Plausible and what is Probable, versus fantasy, then I would think scenarios around the IoT that cross over into aspects of machine learning are where this cone can be aimed.

Right now the IoT is either a fancy set of buttons, or a notifier. But it does have the ability to move into SPIME-dom; it’s like a new iteration of the proto-SPIME: something akin to a Hyperobject that is tracked over its lifetime, but can also become a physical incarnation of its journey and data. I think in the next 5 years you’re going to see IoT items gain more of a sense of autonomy in terms of their capability, through learning and networking. Without going into the total fantasy of Rise of the Machines, think more of an object that can predict, or consider, or play off things. Again, it’s that idea of something that partially materializes in the world for you, but also has a foot in the world without you.

Moar Bots

I think I might just stay w/ home assistants as my core palette going forward out of CFC Prototyping into Thesis. I feel that they are a physical extension of things going on with our phones, and therefore very prime SPIME material. Plus, their placement into this kind of “IoT hive brain” might be very fun to explore. That, and over the next decade you’re probably going to see a lot of reaction / interplay / discussion about surveillance and voice interaction.

I could think of three vignettes around [Not] Serving You, Serving Itself, Serving Something Else. The something else could be just each other, as in two bots, or a hive? Not sure yet.

I’ll have to write up a few more scenarios, and remember to include humour. I can sometimes get bogged down in the alien / scare thing, but bots can be very funny, just from their glitching or programming. And it’s important that I keep the techno-magic thing in check, because tech isn’t magic, and things like the technological reveal are what make technology so interesting to me.

Scenarios

So, we had this thing where we were asked to make a flow chart about how someone would experience this work. And I have to level here: I’m not good at charts and mind maps. I make lists, or do word association, or just toss it out there and try it out.

In the case of this Google Home assistant that doesn’t assist you, I have some thoughts:

Here are some key words about the interaction I’d like to build: Annoying. Frustrated. Disconcerting. Unsettled. Weirded-out. Familiar. Peripheral.

Placement Thoughts: It doesn’t necessarily have to be in a home environment. Maybe it’s something someone packed away and forgot about; maybe something happened and it ended up outside. Maybe it was ‘placed’ somewhere outdoors. Part of me really wants to hide it somewhere and then just do some documentation. I feel like, if it’s not serving you, then it doesn’t have to entice you to find it. Nor does it have to be in a familiar space.

Scenario 1 (onsite somewhere): You are walking down [a hallway, a corridor, a pathway], and you hear something talking, but you don’t know what it is. It sounds jumbled up. You follow it, and find a Google Home [on a chair, in a garden, on a patio]. It is chanting. Approaching it does nothing; saying “Hey Google” causes it to consider you, then go back to its chanting. You consider your options: do you leave it? Do you not?

Alternate:

  • Approaching it does nothing
  • Approaching it causes it to stop totally
  • Speaking to it causes it to pick out parts of your sentence to work into its own lists

Scenario 2: You are minding your own business writing something, and suddenly your Google Home flips on and starts reading random lists. It refuses to stop. The volume continues to increase over time until it’s almost like it is yelling. You unplug it, but nothing changes. After a little while, it beeps a small pattern and goes to sleep. You still have no idea WTF just happened.

Scenario 3: It is in a standard gallery space. It is alone, in a room, with one pin light. It chatters incessantly. Users can observe it, and they can try to talk to it, but the home-bot doesn’t really care. It just continues chattering. Sometimes it stops and listens to you, and picks up a word or two, but it never does what you tell it. It just continues chattering to itself, until a prescribed time when it falls asleep for a bit.