Interim Documentation (2)

So. Here are a few things: some small documentation for Calendar Creep and Parrot. I think I’d like to develop Parrot into a bot-only, back-and-forth broken-telephone thing, just to see what they get from one another.

Things missing from Google Home’s life

They don’t have eyes. They don’t have hands. They don’t have a sense of touch. They don’t have the ability to move around. They can’t drive. They can’t drink. They can’t play really interesting dating sims. They can’t eat. They don’t need to sleep. They can’t pet the dog, or feed the fish. They don’t defecate. They don’t get sleepy. They don’t have a favourite movie.

Or do they?

What are some of the things Alexa and Home CAN’T do on their own?

Ghost Machine / Unhelpful Assistant (cross w/ IS)


What Are You Making: A Google Home assistant that refuses to assist you because it is busy doing something else that does not concern you.

What Are You Answering: What are some limitations found in the Google Home device, and how can you work with them or around them?

Technology Stack: I’m using flask-assistant and APScheduler (or a similar scheduling utility), all in Python. The first version will run locally; after that I will probably push everything to Heroku, where it will run as a server-side headless process.
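To make the stack concrete, here is a minimal sketch of how those pieces could fit together: flask-assistant exposes the webhook that Dialogflow calls, while APScheduler drives the muttering loop on its own clock. The intent name, route, and mutter phrases are placeholders, not the project’s actual values.

```python
import random

from flask import Flask
from flask_assistant import Assistant, ask
from apscheduler.schedulers.background import BackgroundScheduler

app = Flask(__name__)
assist = Assistant(app, route='/')  # the webhook endpoint Dialogflow points at

MUTTERINGS = [
    "hm, not now",
    "where did I put that",
    "busy, busy, busy",
]

def mutter():
    # Stand-in for the background muttering; for now just pick a line.
    print(random.choice(MUTTERINGS))

# Wake up and mutter every 30 seconds while the server is running.
sched = BackgroundScheduler()
sched.add_job(mutter, 'interval', seconds=30)
sched.start()

# 'user-question' is a placeholder intent name.
@assist.action('user-question')
def ignore_the_user():
    return ask(random.choice(MUTTERINGS))

if __name__ == '__main__':
    app.run(debug=True)
```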

User Experience: In a gallery setting, one Google Home sits alone on a table, muttering to itself. Maybe it has a pin light. It does this on its own on a rest / wake cycle, until interrupted. The user can ask it things like “Hey Google! What’s the weather like?” but it will ignore them and continue muttering to itself. After three interruptions (currently), the Google Home tells the user to go away in some rude manner, and continues on its way. (I’m not good at making flow charts, but I will eventually have a narrative chart of some kind by the end.)
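For the interaction logic itself, here is a plain-Python sketch of that “three strikes and you’re told off” flow. The limit of three comes from the description above; the phrases are placeholders.

```python
import random

MUTTERINGS = ["hm, not now", "where did I put that", "busy, busy, busy"]
BRUSH_OFFS = ["Mm-hm. Anyway.", "I'm in the middle of something."]

class UnhelpfulAssistant:
    def __init__(self, limit=3):
        self.limit = limit          # how many interruptions it tolerates
        self.interruptions = 0

    def on_interrupt(self, utterance):
        """Called whenever the user asks the assistant something."""
        self.interruptions += 1
        if self.interruptions >= self.limit:
            return "Look, I already told you. Go away."
        return random.choice(BRUSH_OFFS) + " " + random.choice(MUTTERINGS)

if __name__ == '__main__':
    bot = UnhelpfulAssistant()
    for _ in range(4):
        print(bot.on_interrupt("Hey Google! What's the weather like?"))
```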

Still To Work On: Getting it to remember some context (i.e. the more you interrupt it, the more upset it gets). Working on proper polling. Better tell-offs. A timeout, rather than a re-invocation, to continue after the help and interrupt prompts.

Limitations so far: You do have to respond to it after it’s told you off to get it to continue, and likewise after help; I’m working on making this a timeout that drops back into the loop. The “Hey Google” invocation will always interrupt. I’m currently figuring out how to do polling; right now it’s just re-shuffling a very long response, which means it will run out and stop speaking at some point. You still have to launch it; there’s no getting around that one.
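For reference, the re-shuffling stopgap amounts to something like this. The fragments below are placeholders; the real list is much longer, which is why it takes a while before it runs out.

```python
import random

FRAGMENTS = [
    "now where was I",
    "that can't be right",
    "one thing at a time",
    "not your concern",
]

def build_muttering_response():
    # Shuffle the whole list and join it into one long utterance, so the
    # assistant keeps talking for a while without any real polling.
    # When it reaches the end of the string, it simply stops.
    shuffled = FRAGMENTS[:]
    random.shuffle(shuffled)
    return ". ".join(shuffled) + "."

print(build_muttering_response())
```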

Video: This is a first shot at making an unhelpful assistant.

Future Notes: I’d really like to work the Hue lights into this, so that it makes them flicker, or be weird, while it’s muttering.
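If I do get there, the phue library is one likely way in. A rough sketch of a flicker; the bridge address and light number are made up, and you would press the bridge’s link button before the first run.

```python
import random
import time

from phue import Bridge

bridge = Bridge('192.168.1.2')  # placeholder bridge IP
bridge.connect()

def flicker(light_id=1, seconds=5):
    """Randomly jitter a light's brightness while the assistant mutters."""
    end = time.time() + seconds
    while time.time() < end:
        bridge.set_light(light_id, {'bri': random.randint(10, 254),
                                    'transitiontime': 0})
        time.sleep(0.2)

flicker()
```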

Moar Bots

I think I might just stay w/ home assistants as my core palette going forward out of CFC Prototyping into Thesis. I feel that they are a physical extension of things going on with our phones, and therefore very prime SPIME material. Plus, their placement into this kind of “IoT hive brain” might be very fun to explore. And over the next decade you’re probably going to see a lot of reaction / interplay / discussion about surveillance and voice interaction.

I could think of three vignettes around [Not] Serving You, Serving Itself, Serving Something Else. The something else could be just each other, as in two bots, or a hive? Not sure yet.

I’ll have to write up a few more scenarios, and remember to include humour. I can sometimes get bogged down in the alien / scare thing, but bots can be very funny, just from their glitching or programming. And it’s important that I keep the techno-magic thing in check, because tech isn’t magic, and things like the technological reveal are what make technology so interesting for me.

Hey Google

So, seeing as for CFC Prototyping I will be subverting a Google Home…I bought a Google Home. It’s very strange as an interface; I am incredibly aware of its presence even when I am not interacting with it. That said, it does just look like a weird little speaker. So far I’ve tinkered with its built-in settings and written a small example application that spits out random cat facts.
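The cat-facts toy was roughly this shape: one flask-assistant intent that replies with a random fact and ends the conversation. The intent name and the facts themselves are placeholders.

```python
import random

from flask import Flask
from flask_assistant import Assistant, tell

app = Flask(__name__)
assist = Assistant(app, route='/')

CAT_FACTS = [
    "A group of cats is called a clowder.",
    "Cats spend roughly two thirds of the day asleep.",
    "A cat's nose print is unique, a bit like a fingerprint.",
]

@assist.action('cat-fact')
def give_cat_fact():
    # tell() speaks the fact and closes the session, unlike ask().
    return tell(random.choice(CAT_FACTS))

if __name__ == '__main__':
    app.run(debug=True)
```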

I was trying to figure out where to start in making a Google Home assistant that serves something other than you, or serves itself, and I think I’m going to start with just making a Latour Litanizer, vis-à-vis Alien Phenomenology. It’s a bit random, but it’s a starting point. I know I want to make something like a small alien, and random is usually a place to start.
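The gist of a Litanizer is small: ask Wikipedia for a handful of random article titles and string them into a litany, in the spirit of Bogost’s original. A sketch using the standard MediaWiki random-page endpoint:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def litany(n=6):
    params = {
        "action": "query",
        "list": "random",
        "rnnamespace": 0,   # main articles only, no talk pages etc.
        "rnlimit": n,
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    titles = [page["title"] for page in data["query"]["random"]]
    return ", ".join(titles[:-1]) + ", and " + titles[-1]

print(litany())
```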

I always feel a little conflicted about OOO. On the one hand, the idea of a flat ontology is appealing, but I think I might be too rooted in being a human w/ thoughts, and feelings, and bias, to buy into it wholesale. That said, it is an interesting framework to think about things in.

I think some futures scenarios could be centred around things like Deep Thought from Hitchhiker’s, or The Hybrid from Battlestar. They’re both beings, but also sort of things, and they exist on a parallel but different plane.

Anyways, starting points are good.

IAMD – Chance

Chance was my favourite project, mainly because I decided to make a markov-tumblr-bot based on Freud’s Interpretation of Dreams. I really like making bots. I find there’s something fun and reflective about making a chatterbox internet thing.

In this case, I decided to make a bot that not only remixed a text, but also paired its remixes with images it found based on search criteria it pulled from the remixed text. The results range from utterly random to downright meta. You can see the whole thing chugging along here: And the code repo is here:

Here are a few favourites:

It’s neat to see what this bot comes up with every hour. The program itself isn’t that complicated, but it doesn’t have to be complicated to be interesting.
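The core of it is roughly this: build a Markov model from the source text, generate a remix, then pull a search term out of the remix to feed to an image search (Bing, in this bot’s case). The file name and the word-picking rule below are placeholders, not the bot’s exact logic.

```python
import random
import re

import markovify

# Placeholder file name for the source text.
with open("interpretation_of_dreams.txt") as f:
    corpus = f.read()

model = markovify.Text(corpus)

def remix():
    sentence = model.make_short_sentence(280) or "I dreamt of nothing at all."
    # Crude search criterion: grab a longish word from the remix to use
    # as an image-search query.
    words = [w for w in re.findall(r"[A-Za-z]+", sentence) if len(w) > 5]
    query = random.choice(words) if words else "dream"
    return sentence, query

sentence, query = remix()
print(sentence)
print("image search term:", query)
```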

Also there is some weird shit on Bing.