Interim Documentation

This week I focused on doing some interim documentation, including some videos. I grabbed a lamp from Chinatown to shield the Hue bulb, but now I’m thinking about how I’d really like to make my own hanging lamp for this home. Something that represents how it feels, or that works with its emoting. I put together a Pinterest board to start thinking of shapes. I still need to lay out an image, but I’m going to follow the same array-of-stuff layouts I used in my independent study, except using as many items as I can think of.

The interaction is better, but it’s still somewhat static. I’d like to start working some contexts into the device, so that when you ask it things like “what’s the weather” it remembers that you are talking about weather, and responds w/ something sort of evil, but related. Maybe it remembers you asked about that and decides you’re boring in addition to being bothersome.
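To sketch what I mean (everything here is hypothetical; the topic and the lines are placeholders), the memory could be as dumb as a dict that counts how often a topic comes up and gets ruder the higher the count:

```python
# Hypothetical sketch: count how often each topic comes up and escalate the snark.
topic_counts = {}

SNARK = {
    "weather": [
        "The weather is whatever is happening outside. Go look.",
        "Weather again? You really are fixated on the sky.",
        "I refuse to discuss the sky with someone this boring.",
    ],
}

def respond(topic):
    """Return an increasingly dismissive response the more a topic is raised."""
    count = topic_counts.get(topic, 0)
    topic_counts[topic] = count + 1
    lines = SNARK.get(topic, ["That does not concern you."])
    return lines[min(count, len(lines) - 1)]

if __name__ == "__main__":
    for _ in range(4):
        print(respond("weather"))
```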

I’ve been trying to think about where OOO and OOF can fit into this. I think there are things that can be touched on in terms of embodiment, but also bias. It’s weird how people I know refer to the devices as “she”, and even I find myself picking the female voice option for the Google Home vs the male one. There’s also the idea of things doing actions for themselves, or one another, vs you.

Some difficulties I had this week around filming were mostly due to my lack of experience making videos. I ended up w/ some background machine hum, and realized that next time around, I’m going to have to mic the device, or think of a location that doesn’t have totally exposed HVAC. I’m going to have to do a better job of white balancing things as well. It is kind of neat how the Hue lights are so bright though. I forgot they were 800 lumens each.

Ghost Machine / Unhelpful Assistant (cross w/ IS)

Repo: https://github.com/sharkwheels/CFC_Lab/tree/master/unhelpful_bot_v01

What Are You Making: A Google Home assistant that refuses to assist you because it is busy doing something else that does not concern you.

What Are You Answering: What are some limitations found in the Google Home device, and how can you work with them or around them?

Technology Stack: I’m using Api.ai, flask-assistant, and APScheduler (or another such utility), all in Python. The first version will run locally. After that I will probably push everything to Heroku, where it will run as a server-side headless process.
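As a rough sketch of how those pieces hang together (the intent names here are made up, and would have to match whatever I end up defining in Api.ai), the flask-assistant side is basically a couple of decorated handlers that refuse to help:

```python
# Minimal flask-assistant sketch. Intent names ('ask-weather', 'quit') are placeholders
# for whatever gets defined in Api.ai.
from flask import Flask
from flask_assistant import Assistant, ask, tell

app = Flask(__name__)
assist = Assistant(app, route='/')

@assist.action('ask-weather')
def ignore_weather():
    # The "helpful" intent fires, but the bot just keeps muttering instead of answering.
    return ask("Busy. Rock. Spoon. Antenna. Mould. Go away.")

@assist.action('quit')
def dismiss():
    return tell("Finally. Goodbye.")

if __name__ == '__main__':
    app.run(debug=True)
```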

User Experience: If it’s a gallery setting, one Google Home sits alone on a table, muttering to itself. Maybe it has a pin light. It does this on its own on a rest / wake cycle, until interrupted. The user can ask it things like “Hey Google! What’s the weather like?” but it will ignore them and continue muttering to itself. After 3 interruptions (currently), the Google Home tells the user to go away in some rude manner, and continues on its way. (I’m not good at making flow charts, but I will eventually have a narrative chart of some kind by the end.)
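The three-strikes part could be as simple as a counter on the server side; something in this direction (thresholds and tell-offs are placeholders, not the final script):

```python
# Sketch of the "after 3 interruptions, tell them to go away" behaviour.
interruptions = 0

MUTTERING = "Sprocket. Lichen. Doorknob. Static."
TELL_OFFS = [
    "I'm busy.",
    "Still busy. This does not concern you.",
    "Go away. I was not put on this table for you.",
]

def on_interrupt():
    """Called whenever the user barges in with 'Hey Google'."""
    global interruptions
    interruptions += 1
    if interruptions >= 3:
        interruptions = 0              # told off; reset and go back to muttering
        return TELL_OFFS[-1]
    return TELL_OFFS[interruptions - 1] + " " + MUTTERING
```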

Still To Work On: Getting it to remember some contexts (i.e. the more you interrupt it, the more upset it gets). Working on proper polling. Better tell-offs. A timeout, rather than a re-invocation, to continue after the help and interrupt flows.
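For the rest / wake cycle and the timeout, I’m thinking something in the direction of APScheduler’s background scheduler; a rough sketch (the interval is arbitrary):

```python
# Rough sketch of a rest / wake cycle with APScheduler (interval is a placeholder).
import time
from apscheduler.schedulers.background import BackgroundScheduler

awake = False

def toggle_cycle():
    """Flip between muttering and resting every few minutes."""
    global awake
    awake = not awake
    print("waking up and muttering" if awake else "beeping a small pattern and going quiet")

scheduler = BackgroundScheduler()
scheduler.add_job(toggle_cycle, 'interval', minutes=5)

if __name__ == '__main__':
    scheduler.start()
    try:
        while True:
            time.sleep(1)
    except (KeyboardInterrupt, SystemExit):
        scheduler.shutdown()
```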

Limitations so far: You do have to respond to it after it’s told you off to get it to continue, and also after help; I’m working on making this more of a timeout that goes back to looping. The “Hey Google” invocation will always interrupt. Currently figuring out how to do polling. Right now it’s just re-shuffling a very long response, which means it will run out and stop speaking at some point. You still have to launch it; there’s no getting around that one.
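For reference, the current “very long response” approach is basically just a shuffle, which is why it eventually runs dry:

```python
# Roughly the current approach: shuffle one big pool of phrases into a single long
# utterance. Once it has been spoken, that's it, which is the run-out problem above.
import random

PHRASES = [
    "Copper wire.", "A damp sock.", "The idea of Tuesday.",
    "Forty-one pigeons.", "Static.", "An unplugged toaster.",
]

def build_mutter():
    pool = PHRASES[:]            # copy so the source list doesn't get mangled
    random.shuffle(pool)
    return " ".join(pool)

if __name__ == "__main__":
    print(build_mutter())
```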

Video: This is a first shot at making an unhelpful assistant.

Future Notes: I’d really like to work the Hue lights into this, so that it makes them flicker, or be weird, when it’s muttering.
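I haven’t tried this yet, but the phue library looks like the easy route; an untested sketch of a flicker loop (the bridge IP and light number are placeholders for whatever is on my network):

```python
# Untested sketch of a Hue flicker using phue. Bridge IP and light id are placeholders.
import random
import time
from phue import Bridge

bridge = Bridge('192.168.1.10')     # press the bridge's link button before first run
bridge.connect()

def flicker(light_id=1, seconds=10):
    """Jitter the brightness at random while the bot is muttering."""
    end = time.time() + seconds
    while time.time() < end:
        bridge.set_light(light_id, 'bri', random.randint(10, 254))
        time.sleep(random.uniform(0.05, 0.4))

if __name__ == '__main__':
    flicker()
```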

Moar Bots

I think I might just stay w/ home assistants as my core palette going forward out of CFC Prototyping into Thesis. I feel that they are a physical extension of things going on with our phones, and are therefore very prime SPIME material. Plus their placement into this kind of “IoT hive brain” might be very fun to explore. That, and over the next decade you’re probably going to see a lot of reaction / interplay / discussion about surveillance and voice interaction.

I could think of three vignettes around [Not] Serving You, Serving Itself, Serving Something Else. The something else could be just each other, as in two bots, or a hive? Not sure yet.

I’ll have to write up a few more scenarios, and remember to include humour. I can sometimes get bogged down in the alien / scare thing, but bots can be very funny, just from their glitching or programming. And it’s important that I keep the techno-magic thing in check, because tech isn’t magic. Things like the technological reveal are what make technology so interesting for me.

Scenarios

So, we had this thing where we were asked to make a flow chart about how someone would experience this work. And I have to level here, I’m not good at charts and mind maps. I make lists, or do word association, or just toss it out there and try it out.

In the case of this google home assistant that doesn’t assist you, I have some thoughts:

Here are some key words about the interaction I’d like to build: Annoying. Frustrated. Disconcerting. Unsettled. Weirded-out. Familiar. Peripheral.

Placement Thoughts: It doesn’t necessarily have to be in a home environment. Maybe it’s something someone packed away and forgot about; maybe something happened and it ended up outside. Maybe it was ‘placed’ somewhere outdoors. Part of me really wants to hide it somewhere and then just do some documentation. I feel like, if it’s not serving you, then it doesn’t have to entice you to find it. Nor does it have to be in a familiar space.

Scenario 1 (onsite somewhere): You are walking down [a hallway, a corridor, a pathway], and you hear something talking, but you don’t know what it is. It sounds jumbled up. You follow it, and find a Google Home [on a chair, in a garden, on a patio]. It is chanting. Approaching it does nothing; saying “Hey Google” causes it to consider you, then it goes back to its chanting. You consider your options: do you leave it? Do you not?

Alternate:

  • Approaching it does nothing
  • Approaching it causes it to stop totally
  • Speaking to it causes it to pick out parts of your sentence to work into its own lists

Scenario 2: You are minding your own business writing something, and suddenly your Google Home flips on and starts reading random lists. It refuses to stop. The volume continues to increase over time until it’s almost like it is yelling. You unplug it, but nothing changes. After a little while, it beeps a small pattern and goes to sleep. You still have no idea WTF just happened.

Scenario 3: It is in a standard gallery space. It is alone, in a room, with one pin light. It chatters incessantly. Users can observe it, they can try and talk to it, but the home-bot doesn’t really care. It just continues chattering. Sometimes it stops and listens to you, and picks up a word or two, but never does what you tell it. It just continues chattering to itself, until a prescribed time when it falls asleep for a bit.

Hey Google

So, seeing as for CFC Prototyping I will be subverting a Google Home…I bought a Google Home. It’s very strange, as an interface. I am incredibly aware of its presence even when I am not interacting with it. That said, it does just look like a weird little speaker. So far I’ve tinkered with its built-in settings, and written a small example application that spits out random cat facts.

I was trying to figure out where to start, in making a Google Home assistant that serves something other than you, or serves itself, and I think I’m going to start with just making a Latour Litanizer, vis-à-vis Alien Phenomenology. It’s a bit random, but it’s a starting point. I know I want to make something like a small alien, and random is usually a place to start.
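As a first pass, the litanizer doesn’t need to be more than a random sample joined with commas. Bogost’s original, as I understand it, pulls random Wikipedia article titles, which is the properly alien version; a local list is enough to start:

```python
# First-pass litanizer: a random, flat list of things.
import random

OBJECTS = [
    "a lighthouse", "quarks", "the HVAC hum", "an 800 lumen bulb",
    "a hanging lamp", "a damp sock", "the weather", "a google home",
    "copper wire", "a kindle of kittens",
]

def litany(n=5):
    """Return a comma-spliced litany, flat and unranked."""
    return ", ".join(random.sample(OBJECTS, n)) + "."

if __name__ == "__main__":
    print(litany())
```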

I always feel a little conflicted about OOO: on the one hand, the idea of a flat ontology is appealing; on the other, I think I might be too rooted in being a human w/ thoughts, feelings, and bias to buy into it wholesale. That said, it is an interesting framework to think about things in.

I think some futures scenarios could be centred around things like Deep Thought from Hitchhiker’s, or The Hybrid from Battlestar. They’re both beings, but also sort of things, and they exist on a parallel but different plane.

Anyways, starting points are good.