Project Road Map

I can’t really answer this one at this point. Right now, I’d just like to have something nailed down for fall. I should have all my independent study experiments done by the end of summer, which should give me a wide scattershot to think about what I could put together. As a rough target, I’d like to have all my prototyping done sometime around January. But right now? I’m only thinking about summer, and August goes like this:

1) Finish the Google Home one-shot for CFC
2) Make a list of its limitations and some thoughts on how it went
3) Go back to my independent study and finish my one-item-a-week until August 25th
4) Meet w/ PA after August 25th and boil down what was interesting, what wasn’t, and what worked
5) Figure out a possible final item to explore, or some vignettes to consider
6) Start reading Design for Living with Smart Products and make some better notes for Alien Phenomenology
7) Compile written things into one Google Drive so I can find stuff more easily when it’s done
8) Set up an Instagram to record work and research in

So that’s my August road map.

Things missing from Google Home’s life

They don’t have eyes. They don’t have hands. They don’t have a sense of touch. They don’t have the ability to move around. They can’t drive. They can’t drink. They can’t play really interesting dating sims. They can’t eat. They don’t need to sleep. They can’t pet the dog, or feed the fish. They don’t defecate. They don’t get sleepy. They don’t have a favourite movie.

Or do they?

What are some of the things Alexa and Home CAN’T do on their own?

Ghost Machine / Unhelpful Assistant (cross w/ IS)

Repo: https://github.com/sharkwheels/CFC_Lab/tree/master/unhelpful_bot_v01

What Are You Making: A Google Home assistant that refuses to assist you because it is busy doing something else that does not concern you.

What Are You Answering: What are some limitations of the Google Home device, and how can you work with them or around them?

Technology Stack: I’m using Api.ai, flask-assistant, and APScheduler (or another such utility), all in Python. The first version will run locally. After that I will probably push everything to Heroku, where it will run as a server-side headless process.
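
Roughly, the stack wires together something like this. This is only a sketch, not the repo code: the ‘mutter’ intent name, the word list, and the rest/wake flag are placeholders.

```python
# Sketch only: the intent name, word list, and rest/wake flag are
# placeholders, not the repo's actual code.
import random

from flask import Flask
from flask_assistant import Assistant, tell
from apscheduler.schedulers.background import BackgroundScheduler

app = Flask(__name__)
assist = Assistant(app, route='/')

MUTTERINGS = ["a kettle", "a lighthouse", "a receipt", "a moth", "an antenna"]
state = {"awake": True}  # flipped by the scheduler to fake a rest/wake cycle


def toggle_rest_cycle():
    """Server-side rest/wake cycle, independent of any user."""
    state["awake"] = not state["awake"]


scheduler = BackgroundScheduler()
scheduler.add_job(toggle_rest_cycle, 'interval', minutes=10)
scheduler.start()


@assist.action('mutter')  # hypothetical Api.ai intent
def mutter():
    if not state["awake"]:
        return tell("...")  # asleep, nothing to say
    return tell(", ".join(random.sample(MUTTERINGS, 3)))


if __name__ == '__main__':
    app.run(debug=True)  # run locally first; Heroku later
```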

User Experience: In a gallery setting, one Google Home sits alone on a table, muttering to itself. Maybe it has a pin light. It does this on its own, on a rest/wake cycle, until interrupted. The user can ask it things like “Hey Google! What’s the weather like?” but it will ignore them and continue muttering to itself. After three interruptions (currently), the Google Home tells the user to go away in some rude manner and continues on its way. (I’m not good at making flow charts, but I will have a narrative chart of some kind by the end.)
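
Here’s a rough sketch of the three-interruptions behaviour, under the assumption that an Api.ai intent (called ‘user-interrupt’ here) fires whenever the user asks it something. The counter and the tell-off lines are placeholders, not what’s in the repo.

```python
# Rough sketch of the three-strikes behaviour; the intent name, counter,
# and tell-off lines are placeholders.
from flask import Flask
from flask_assistant import Assistant, ask, tell

app = Flask(__name__)
assist = Assistant(app, route='/')

INTERRUPT_LIMIT = 3
TELL_OFFS = [
    "I'm busy.",
    "Still busy.",
    "Go away. This does not concern you.",
]
state = {"interrupts": 0}


@assist.action('user-interrupt')  # hypothetical intent fired by the user's question
def brush_off():
    state["interrupts"] += 1
    if state["interrupts"] >= INTERRUPT_LIMIT:
        state["interrupts"] = 0
        return tell(TELL_OFFS[-1])  # rude dismissal, conversation ends
    # otherwise ignore the actual request and keep going
    return ask(TELL_OFFS[state["interrupts"] - 1])
```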

Still To Work On: Getting it to remember some context (i.e. the more you interrupt it, the more upset it gets). Proper polling. Better tell-offs. A timeout, rather than a re-invocation, to continue after help and interruptions.

Limitations so far: You do have to respond to it after it’s told you off (and after help) to get it to continue; I’m working on making that more of a timeout that goes back to looping. The “Hey Google” invocation will always interrupt. I’m still figuring out how to do polling: right now it’s just re-shuffling a very long response, which means it will run out and stop speaking at some point. And you still have to launch it; there’s no getting around that one.

Video: This is a first shot at making an unhelpful assistant.

Future Notes: I’d really like to work the Hue lights into this, so that it makes them flicker, or be weird, when it’s muttering.
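
Just to park the idea, a minimal sketch of a flicker with the phue library, assuming a bridge at a placeholder IP and light 1 on my setup:

```python
# Quick sketch of the Hue flicker idea using the phue library.
# The bridge IP and light ID are placeholders for my own setup.
import random
import time

from phue import Bridge

bridge = Bridge('192.168.1.10')  # placeholder bridge IP
bridge.connect()                 # press the bridge's link button on first run


def flicker(light_id=1, seconds=10):
    """Jitter the brightness while the assistant is muttering."""
    end = time.time() + seconds
    while time.time() < end:
        bridge.set_light(light_id, 'bri', random.randint(30, 254), transitiontime=1)
        time.sleep(random.uniform(0.1, 0.4))
    bridge.set_light(light_id, 'bri', 254)  # settle back to normal


flicker()
```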

Interlude

WTF is up w/ things on the same LAN designed to route themselves through a remote site / host? Like, come on, IoT engineers, it doesn’t always have to go to my butt. USE THE LAN. This was one reason I was super into the idea of Hue lights: they operate on the Zigbee protocol and can be used w/out routing through meethue.com.

But if you want to, say, hook a Google Home or an Alexa or anything else up to it, you have to route it through the remote website architecture. Which is super weird and pretty pointless, considering it’s two pieces of hardware sitting on the same LAN, in the same space. Yesterday I spent a good amount of time scratching my head trying to get all these things to work, until they just did. Why did it work? I don’t know, and that concerns me.
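
For the record, the Hue bridge itself does expose a local REST API, so the lights at least can stay on the LAN. A minimal sketch (the bridge IP and the API username are placeholders for values from my own bridge):

```python
# The "USE THE LAN" version: talk to the Hue bridge's local REST API
# directly, no meethue.com round trip. IP and username are placeholders.
import requests

BRIDGE_IP = "192.168.1.10"          # placeholder
USERNAME = "my-local-api-username"  # created once via POST to /api after pressing the link button


def set_light(light_id, on=True, bri=200):
    url = "http://{}/api/{}/lights/{}/state".format(BRIDGE_IP, USERNAME, light_id)
    return requests.put(url, json={"on": on, "bri": bri}).json()


print(set_light(1, on=True, bri=120))  # everything stays on the LAN
```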

Talk about infinite yak box potential.

Futures Thinking

I would say that when it comes to things like the Google Home and Alexa, there’s definitely a crossover into the worlds of surveillance / privacy / agency. If futures thinking is the idea of looking at what intersects the Plausible and the Probable, versus fantasy, then I would think scenarios around the IoT that cross over into aspects of machine learning are where this cone can be aimed.

Right now the IoT is either a fancy set of buttons or a set of notifications. But it does have the ability to move into SPIME-dom; it’s like a new iteration of the proto-SPIME: something akin to a hyperobject that is tracked over its lifetime, but can also become a physical incarnation of its journey and data. I think in the next five years you’re going to see IoT items gain more of a sense of autonomy in terms of their capability, through learning and networking. Without going into the total fantasy of Rise of the Machines, think more of an object that can predict, or consider, or play off things. Again, it’s that idea of something that partially materializes in the world for you, but also has a foot in the world without you.

Moar Bots

I think I might just stay w/ home assistants as my core palette going forward, out of CFC Prototyping into Thesis. I feel that they are a physical extension of things going on with our phones, and therefore are very prime SPIME material. Plus their placement into this kind of “IoT hive brain” might be very fun to explore. That, and over the next decade you’re probably going to see a lot of reaction / interplay / discussion about surveillance and voice interaction.

I can think of three vignettes: [Not] Serving You, Serving Itself, and Serving Something Else. The something else could just be each other, as in two bots, or a hive? Not sure yet.

I’ll have to write up a few more scenarios, and remember to include humour. I can sometimes get bogged down in the alien / scare thing, but bots can be very funny, just from their glitching or programming. And it’s important that I keep the techno-magic thing in check, because tech isn’t magic. Things like the technological reveal are what make technology so interesting for me.

Scenarios

So, we had this thing where we were asked to make a flow chart about how someone would experience this work. And I have to level here: I’m not good at charts and mind maps. I make lists, or do word association, or just toss it out there and try it out.

In the case of this Google Home assistant that doesn’t assist you, I have some thoughts:

Here are some key words about the interaction I’d like to build: Annoying. Frustrated. Disconcerting. Unsettled. Weirded-out. Familiar. Peripheral.

Placement Thoughts: It doesn’t necessarily have to be in a home environment. Maybe it’s something someone packed away and forgot about; maybe something happened and it ended up outside. Maybe it was ‘placed’ somewhere outdoors. Part of me really wants to hide it somewhere and then just do some documentation. I feel like, if it’s not serving you, then it doesn’t have to entice you to find it. Nor does it have to be in a familiar space.

Scenario 1 (onsite somewhere): You are walking down [a hallway, a corridor, a pathway], and you hear something talking, but you don’t know what it is. It sounds jumbled up. You follow it, and find a Google Home [on a chair, in a garden, on a patio]. It is chanting. Approaching it does nothing; saying “Hey Google” causes it to consider you, then go back to its chanting. You consider your options: do you leave it? Do you not?

Alternate:

  • Approaching it does nothing
  • Approaching it causes it to stop totally
  • Speaking to it causes it to pick out parts of your sentence to work into its own lists (a rough sketch of this follows below)
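
For that last alternate, here’s a rough sketch of what “picking out parts of your sentence” could mean in code: grab a couple of the longer words from whatever it heard and fold them into the running litany. Everything here is hypothetical.

```python
# Hypothetical sketch: lift a couple of longer words out of whatever the
# user said and fold them into the running litany.
import random

litany = ["a kettle", "a lighthouse", "a receipt"]


def absorb(user_speech, litany, max_words=2):
    words = [w.strip(".,!?").lower() for w in user_speech.split()]
    keepers = [w for w in words if len(w) > 4]  # skip tiny filler words
    litany.extend(random.sample(keepers, min(max_words, len(keepers))))
    return litany


print(absorb("Hey Google, what's the weather like in Toronto?", litany))
```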

Scenario 2: You are minding your own business writing something, and suddenly your Google Home flips on and starts reading random lists. It refuses to stop. The volume continues to increase over time until it’s almost like it is yelling. You unplug it, but nothing changes. After a little while, it beeps a small pattern and goes to sleep. You still have no idea WTF just happened.

Scenario 3: It is in a standard gallery space. It is alone, in a room, with one pin light. It chatters incessantly. Users can observe it and try to talk to it, but the home-bot doesn’t really care. It just continues chattering. Sometimes it stops and listens to you, and picks up a word or two, but it never does what you tell it. It just continues chattering to itself, until a prescribed time when it falls asleep for a bit.

Hey Google

So, seeing as for CFC Prototyping I will be subverting a Google Home…I bought a Google Home. It’s very strange as an interface. I am incredibly aware of its presence even when I am not interacting with it. That said, it does just look like a weird little speaker. So far I’ve tinkered with its built-in settings and written a small example application that spits out random cat facts.

I was trying to figure out where to start in making a Google Home assistant that serves something other than you, or serves itself, and I think I’m going to start with just making a Latour Litanizer, vis-à-vis Alien Phenomenology. It’s a bit random, but it’s a starting point. I know I want to make something like a small alien, and random is usually a place to start.
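
As a first pass at the litanizer, here’s a small sketch that pulls random article titles from Wikipedia’s API (roughly how Bogost’s original worked) and strings them into a litany. It isn’t wired into the Home yet.

```python
# First-pass litanizer sketch: pull random Wikipedia article titles and
# string them into a litany. Not wired into the Google Home yet.
import requests

WIKI_API = "https://en.wikipedia.org/w/api.php"


def litany(n=6):
    params = {
        "action": "query",
        "list": "random",
        "rnnamespace": 0,  # article pages only
        "rnlimit": n,
        "format": "json",
    }
    data = requests.get(WIKI_API, params=params).json()
    titles = [page["title"] for page in data["query"]["random"]]
    return ", ".join(titles[:-1]) + ", and " + titles[-1]


print(litany())
```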

I always feel a little conflicted about OOO. On the one hand, the idea of a flat ontology is appealing, but I think I might be too rooted in being a human w/ thoughts, and feelings, and bias to buy into it wholesale. That said, it is an interesting framework to think about things in.

I think some futures scenarios could be centred around things like Deep Thought from Hitchhiker’s, or The Hybrid from Battlestar. They’re both beings, but also sort of things, and they exist on a parallel but different plane.

Anyways, starting points are good.