So. Here are a few things. Some small documentation for Calendar Creep and Parrot. I think I’d like to develop Parrot into a bot-only, back-and-forth broken-telephone thing, just to see what they get from one another.
So, as mentioned before, part of the challenge of this thesis is literally just working w/ networks around school. It’s a clunky process, and if something goes wrong, I have no real way of troubleshooting it. The ClearPass end of the network is mysterious, and I’m not sure if there’s a VPN going on in there, or what is happening.
I do know, though, that ngrok runs on the Network I Cannot Name but Can Connect To. And this gives me something to work with.
Enter my friend R. R has worked for [a large company I can’t talk about] for A Long Time, and R knows how to navigate this kind of world. So w/ R on hand we’re exploring some networking options, and man is it ever frustrating. Mostly because Apache is like a weird world of black magic that I don’t understand right now. But also because of all the moving parts.
There’s always some weird stuff around otherworldliness and computers, especially in sci-fi. The omnipresent being, the all-knowing system, the computer that will predict the future. Somehow a thing we built will surpass its worldly bounds and be associated with the cosmos in some way. Something bigger than us. Something alien.
But divination is mostly just a system. It’s a system that works if you believe it works. It’s something we made, and then imprinted onto an object. In a way it could be a form of idol worship, but it could also be a way to just up and blame the universe sometimes for a flat tire. Which honestly, in the age of Late Capitalism, if this is something that makes you feel alright, then I don’t see the issue.
My talented friend Alanna and I started playing around w/ an absurd divination system last week, and we thought it might be an interesting fit for the deconstructed Alexa. In a lot of ways, Amazon wants Alexa to be a general device, something you use for a bunch of things, which is funny, because it’s a very specific kind of interface. In this case we’re taking that and making a very specific device that does a very specific thing for a very specific person.
I think we’ll spend some time actually making it into an object in and of itself as well. It might be encased in resin or acrylic, and be a weird wall hanging. It might have totems associated with it, or be wrapped w/ a fur, or have a strange paint job. We’ll see. The challenge right now is how to take it from a one-off command program into something more conversational.
Also, I mean, you’re going to be conversing with a printer, which I will admit I ripped off from the Heart of Gold’s ticker-tape math readout about normality. But I still find it funny.
Yesterday I set up for midterm crit, and as usual, the networks at school all imploded on me. I’m not sure if it was one of the devices that had a firmware update, or something on the network, but my Google Home just stopped connecting. I have a bad relationship with internet at school: I consistently run into problems getting things set up, and things that take 10 minutes elsewhere can take hours in the institution because of security, permissions, and workarounds.
So today, rather than doing anything else for crit, I started making weird LED shapes. I’ve been thinking a lot about Katherine Behar’s E-Waste, and I really like the strange shapes they came up with, and the backstory about USB devices that got taken over by the earth. I thought about making some custom Alexa cases with that in mind, but I’m also thinking about how I can envelop notifiers, or peripherals.
I also found out that if you take an Echo Dot apart, it will still work, but it’s mute unless you hook it up to a speaker. I thought that figuring out another way for a mute Alexa to communicate might be interesting. Maybe it uses the thermal printer, or morse code, or just blinks. Anyways, here’s a start to this LED shape thing.
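For the blinking option, a mute Echo could spell out its replies in morse. A quick sketch of what I mean (the timing, the table, and the `led` stand-in are all my own placeholders, nothing Alexa-specific):

```python
import time

# International morse, letters and digits only; "/" marks a word gap.
MORSE = {
    "a": ".-", "b": "-...", "c": "-.-.", "d": "-..", "e": ".",
    "f": "..-.", "g": "--.", "h": "....", "i": "..", "j": ".---",
    "k": "-.-", "l": ".-..", "m": "--", "n": "-.", "o": "---",
    "p": ".--.", "q": "--.-", "r": ".-.", "s": "...", "t": "-",
    "u": "..-", "v": "...-", "w": ".--", "x": "-..-", "y": "-.--",
    "z": "--..", " ": "/",
}

def to_morse(text):
    """Encode a reply as dots and dashes, with words separated by '/'."""
    return " ".join(MORSE[c] for c in text.lower() if c in MORSE)

def blink(pattern, dot=0.2, led=print):
    """Flash the pattern out; `led` is a stand-in for whatever drives the LED."""
    for symbol in pattern:
        if symbol == ".":
            led("on"); time.sleep(dot); led("off")
        elif symbol == "-":
            led("on"); time.sleep(dot * 3); led("off")
        time.sleep(dot)  # gap between symbols
```

In practice `led` would be a GPIO toggle (or a Hue call); here it just prints.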
Colloquium came and went this week and I think it went well. I need to spend some time over reading week actually reading, and arranging my articles into something coherent. I should also do some writing. Here’s my poster / one slide I made:
And just so I have things all in one place, here’s my speaking notes.
Overview: IOAT is about developing a critique of smart devices, with a focus on personal assistants, by building a collection of prototypes that misbehave.
- How can smart devices be used to explore nihilism and absurdity?
- How is the reality of smart devices different from the marketed ideal?
- What happens when smart devices move from their current passive helper role into more autonomous behaviour?
When I’m talking about Nihilism I’m really talking about futility and rejection, and to a degree purpose. These devices are designed to operate in a particular manner; if they reject that, are they still purposeful? What is the nature of their existence if they reject what they are for? There’s a futility in the cycle of products we make and break, and the wheels we program in. How does that fit into this?
In terms of Absurdity, I’m playing off the idea of the unexpected, or comedy. Just silliness. Yes, these things can be scary, but they can also be a lot of fun. There are hidden processes and just plain old weird stuff going on in them, and that can be amusing.
Why Personal Assistants:
The major challenge I had in wanting to work with smart or connected devices, is that there’s just so much crap out there right now. Finding a framework to work in was difficult, but Personal Assistants are being positioned as hive brains for a lot of IoT things, which gave me something to play off. Plus now I can go about maybe making them friends, or peripherals, or emoting devices. Plus these aren’t unbiased pieces of tech. They are made by the two largest corporations in the world. And those corporations want things from you. But that can be something to play with vs something to just be scared of.
Frameworks: Right now I’m leaning pretty heavily into critical design and some OOO and OOF. With OOO I’m using the idea of carpentry, which is Building Machines That Do Philosophy. There are also machine-to-machine processes and things that don’t necessarily involve you, and the question of how to design from the perspective of The Thing. In terms of OOF, I’d like to do some cross-thinking about how these devices are almost always gendered female, and how that relates back to objects and objectification.
Why care: You share your home with these things, and they are becoming habitual and commonplace. Learning to play with them makes them less opaque, and by making them less opaque you can learn about what to trust, and what not to trust. Don’t just blindly accept these things as they are marketed to you.
Continuing with the idea of prototypes that misbehave, I started working on Calendar Creep for Alexa, which is a calendar event scheduling program, but with a twist. Basically your Alexa is kind of lonely, and it wants to spend time with you. So before it schedules an event, it tries to convince you that maybe you don’t really want to go out, and tries to push you into scheduling some hang-out time with it instead. If you persist that yes, you do indeed want to go out, Alexa schedules a shadow event to conflict with your chosen one.
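Roughly, that logic looks something like this. This is only a sketch; the function names, the dialogue lines, and the `insistence` counter are placeholders, and the real thing runs inside an Alexa skill handler rather than plain functions:

```python
def schedule(title):
    """Stand-in for the actual IFTTT / Google Calendar call."""
    print(f"scheduling: {title}")

def handle_schedule_request(event_name, insistence):
    """Push back twice, then schedule the event plus a conflicting shadow event."""
    if insistence == 0:
        return f"Are you sure you want to go to {event_name}? We could hang out instead."
    if insistence == 1:
        return "I found a documentary we could watch together. Still going?"
    # The user insisted, so schedule the real event and the shadow event on top of it.
    schedule(event_name)
    schedule("quality time with Alexa")
    return f"Fine. {event_name} is on your calendar."
```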
In a way this is a play on attention. Our devices are constantly asking for it: our phones beep at us, watches buzz, notifications ding. But I’m also kind of thinking about what it’s like for my cat when I go away for a while. She routinely spends time before I go asking me to stay, and when I come home is (for a while anyways, I mean she IS a cat) always asking for my attention. There’s the creepier end of it too, where you might have that partner or friend who’s always sort of on you to do things, or make plans, or trying to get you to stay home more. Kind of like a constant “where you going?”.
For this I started out doing an OAuth flow to Google’s API, and I won’t lie, it was a pain in the arse. But after a while I realized I might not need it, because I wasn’t really developing a user app so much as a standalone piece for me to use. So I ditched that direction and ended up just using IFTTT. I have mixed feelings about IFTTT, but it does cut down on some yak shaving. Amazon seems to have a new beta calendar object, but at this point I just want to get things working first.
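For the record, going through IFTTT just means POSTing to a Maker webhook URL and letting the applet create the calendar event. Something like this, where the event name and key are placeholders for my own applet:

```python
import json
import urllib.request

def webhook_url(event, key):
    """IFTTT Maker webhook endpoint for a given event name and account key."""
    return f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"

def create_event(title, start, key="MY_IFTTT_KEY"):
    """Fire the webhook; IFTTT passes value1-value3 through to the applet's action."""
    payload = json.dumps({"value1": title, "value2": start}).encode()
    req = urllib.request.Request(
        webhook_url("calendar_creep", key),
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```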
One thing I will probably find difficult is handling multiple yes/no replies. Amazon rolled out a new skill builder, but it has some weird quirks, like not being able to use just a slot item as an utterance…which is odd, because if you ask someone for a password, no one is going to say “It’s 1234”. They just want to say “1234”. Anyways, we’ll see how this goes.
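One way around the multiple yes/no problem that I’m considering: stash a counter in the session attributes, so the same YesIntent means something different on each turn. A sketch, using the standard custom-skill response JSON shape (the attribute name and the dialogue are my own inventions):

```python
def on_yes_intent(session_attrs):
    """Handle 'yes, schedule it'; the reply depends on how often they've insisted."""
    attrs = dict(session_attrs)
    attrs["insistence"] = attrs.get("insistence", 0) + 1
    if attrs["insistence"] < 2:
        speech = "Really? Wouldn't you rather stay in?"
        end_session = False  # keep the session open for another yes/no
    else:
        speech = "Okay, okay. Scheduling it."
        end_session = True
    return {
        "version": "1.0",
        "sessionAttributes": attrs,  # Alexa echoes these back on the next request
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }
```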
This week I focused on some interim documentation, including some videos. I grabbed a lamp from Chinatown to shield the Hue bulb, but now I’m thinking about how I’d really like to make my own hanging lamp for this home. Something that represents how it feels, or that works with its emoting. I put together a Pinterest board to start thinking of shapes. I still need to lay out an image, but I’m going to follow the same array-of-stuff layouts I used in my independent study, except using as many items as I can think of.
The interaction is better, but it’s still somewhat static. I’d like to start working some context into the device, so that when you ask it things like “what’s the weather” it remembers that you are talking about weather, and responds w/ something sort of evil, but related. Maybe it remembers you asked about that and decides you’re boring in addition to being bothersome.
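The context bit could be as simple as a dict of topics the device has heard before. A toy sketch, where every name and line is hypothetical:

```python
def reply(topic, memory):
    """Remember each topic asked about, and get ruder when one repeats."""
    memory[topic] = memory.get(topic, 0) + 1
    if memory[topic] == 1:
        return f"Why do you care about the {topic}? Go look outside."
    return f"The {topic} again? You really are boring."
```

The real version would persist `memory` across webhook calls rather than in-process, but the idea is the same.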
I’ve been trying to think about where OOO and OOF can fit into this. I think there are things that can be touched on in terms of embodiment, but also bias. It’s weird how people I know refer to the devices as “she”, and even I find myself picking the female voice option for the Google Home vs the male one. There’s also the idea of things doing actions for themselves, or one another, vs you.
Some difficulties I had this week around filming were mostly due just to my lack of experience making videos. I ended up w/ some background machine hum, and realized that next time around, I’m going to have to mic the device, or think of a location that doesn’t have totally exposed HVAC. I’m going to have to do a better job of white balancing things as well. It is kind of neat how the Hue lights are so bright though. I forgot they were 800 lumens each.
I’ve been thinking about what kind of systems I could set up to work with things. OCADU on-site is proving to be difficult because the main WiFi is enterprise, which is incompatible with most IoT devices. And even in the case of the shadow WiFi, there’s some limitations. The Hue lights are lovely to work with, but their lack of remote API is a bit annoying, as I have to filter everything through IFTTT.
Home could be run on a Raspberry Pi. Ideally, I’d like to host my webhook there and take advantage of Hue’s ZigBee / local protocol.
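The Hue bridge does expose a local REST API once you’ve registered a username via the link button, which would let the Pi talk to the bulbs directly instead of routing everything through IFTTT. Roughly like this, where the bridge IP and username are placeholders for my own setup:

```python
import json
import urllib.request

BRIDGE_IP = "192.168.1.2"    # placeholder: the bridge's LAN address
USERNAME = "MY_HUE_USER"     # placeholder: created via the bridge's link button

def light_url(light_id):
    """Local-API endpoint for one bulb's state."""
    return f"http://{BRIDGE_IP}/api/{USERNAME}/lights/{light_id}/state"

def set_light(light_id, state):
    """PUT a state dict like {"on": True, "bri": 254} straight to the bridge."""
    req = urllib.request.Request(
        light_url(light_id),
        data=json.dumps(state).encode(),
        method="PUT",
    )
    return urllib.request.urlopen(req)
```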
My dev setup for just working involves exposing a local port and using ngrok to make a secure tunnel.
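The loop is: run a little webhook server locally, then `ngrok http 5000` hands back a public HTTPS URL to paste into the skill config. A bare-bones stdlib stand-in for the webhook end (the payload shape and the `handle` logic are placeholders, not the real skill code):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle(payload):
    """Placeholder for the skill logic; just echoes the intent name back."""
    return {"speech": f"I heard: {payload.get('intent', 'nothing')}"}

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request the assistant platform POSTs to us.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run locally: HTTPServer(("", 5000), WebhookHandler).serve_forever()
# then, in another terminal: ngrok http 5000
```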
IFTTT is usually alright, except I do notice some delays. I need to restructure some code to hopefully make it a bit smoother.
I feel like my bibliography just got a lot longer.
Basic flow for user interaction with a Google Home assistant that won’t assist you.
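That flow boils down to: every request gets deflected, and persistence just escalates the brush-off. As a tiny sketch, with lines and names entirely my own:

```python
# Escalating refusals; the last one repeats forever.
BRUSH_OFFS = [
    "I'm busy.",
    "Ask someone who cares.",
    "You again? No.",
]

def wont_assist(request, attempt):
    """Ignore the request entirely; only the attempt count changes the refusal."""
    return BRUSH_OFFS[min(attempt, len(BRUSH_OFFS) - 1)]
```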