Category Archives: tinkering

Scalable group touch: Spots

screen capture of 3 spots

Up until now, I have avoided making computer (screen) based group interaction media, because it seems more fitting to work in the tangible, physical domain: we understand things on a different level when we engage our bodies in interacting with the world than when we interpret things rationally.
I have made contraptions and installations through which groups of a few more than two people can interact. The problem with such media is the cost (in time, effort and money) of scaling up to much larger groups.
Screen-based interaction media are more easily scaled. The mobile phone seems a middle road between the completely screen-based and the completely tangible. Given the role phones play in our lives, it also seems quite fitting to develop an interaction medium on that platform.

So I did.

Spots

In its current (pilot) stage, Spots allows people to experience a sense of mutual touch through a mobile phone.
When you start Spots, you see an empty grey screen. When you drag your finger across the screen, a spot approximately the size of your fingertip follows it. These actions are broadcast to other people running Spots.
When other people with the app come ‘online’, a faintly visible spot appears with a soft sound, and their touch actions leave fading traces. When your spot and one of theirs overlap, you feel a vibration and see a quickly fading ripple, showing a trace of your touch.
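The overlap test itself is simple geometry. A minimal sketch of how such detection might work (my own illustration with my own names, not the actual Spots code): each fingertip spot is treated as a circle, and touch is registered when the circles intersect.

```cpp
// Illustrative only: the real Spots implementation may differ.
struct Spot {
  float x, y;   // screen position of the fingertip
  float r;      // spot radius, roughly fingertip-sized
};

// Two spots overlap when the distance between their centres is at most
// the sum of their radii (compared squared, to avoid a square root).
bool overlaps(const Spot& a, const Spot& b) {
  float dx = a.x - b.x;
  float dy = a.y - b.y;
  float rr = a.r + b.r;
  return dx * dx + dy * dy <= rr * rr;
}
```

When the test fires, the app would trigger the vibration and draw the ripple.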

You can try it out! Currently there are these versions:
Spots for Android
Spots for Mac (nb: this is not a signed app, see below)
If the app starts but nothing happens, either you have no internet connection or the server is down. Tap/click the top 10 pixels of the app screen to bring up some detailed info. Let me know your experiences.
On Mac, if it says the file is damaged when you try to open it, you need to allow your Mac to run unsigned applications: go to System Preferences > Security & Privacy and, on the General tab, select Allow applications downloaded from: Anywhere.

The video above shows what it looks like from the Spots server point of view (which runs on a mac).

Spots is a re-imagining, for my own research context, of the app touchThrough developed by Gabrielle Le Bihan and colleagues. (Gabrielle presented touchThrough at TEI’13 and published some of her research with it at CHI’13.)

Yes, I am aware of the very nice Feel Me project by Marco Triverio. There certainly are similarities, but our intentions are quite different in my view.
I am developing a research design that mediates between an individual and the group dynamics they are part of; Marco developed an app that aims to establish an intimate link between individuals.


DDD: Light

Lamp_turret

Once in a while the DQI group at ID TU/e holds a few days' workshop: DQI Doing Days. This time we explored the design of complex systems by creating dynamic light objects that communicate. Three groups each created two objects that communicate through their light behavior, wirelessly (through XBees), or both. We were asked to create functions in the Arduino code that could be activated by other light objects. The workshop was organized by Remco Magielse and Serge Offermans of the Intelligent Lighting Institute.

My group set out to create an object that would disrupt the behavior of the others' objects, both through light and through XBee communication. We created two turrets, one with a light sensor and one with 3 W LEDs. The turret with the sensor continuously searches for the brightest spot around it and tells the other turret where it is. The turret with the super-bright LEDs then turns in that direction to 'blind' the brightest spot. (We assumed that the other light objects would also have light sensors, and that shining a bright light at them would disable any of their behavior based on varying light levels.) In the end the behavior is more surveillance-like than virus-like. Continue reading
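The sensing turret's search boils down to a sweep-and-pick-the-maximum. A rough sketch of that idea, with my own names (not our actual workshop code): take a light reading at each step of the sweep and report the step with the highest reading as the direction to send over XBee.

```cpp
// Illustrative sketch, not the actual workshop code.
// readings[i] is the light level measured at sweep step i;
// the returned index is the direction to transmit to the LED turret.
int brightestDirection(const int readings[], int n) {
  int best = 0;
  for (int i = 1; i < n; ++i) {
    if (readings[i] > readings[best]) {
      best = i;
    }
  }
  return best;
}
```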


One end of the spectrum: Inspired by Cololo

I am building the slider systems (v01 and v02) to do some experiments from which we hope to study how varying qualities of an interaction medium influence the experience.
A project that shows a minimalist version of mediated interaction and telepresence is Cololo, from the Uchiyama lab in Tsukuba (be sure to have a look at some of their other projects).

I have now gotten my slider system to mimic the Cololo behavior as follows: When one slider is moved, the other slider moves randomly for about 4 seconds. During that period, the system does not respond to input on either slider.
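This behavior is essentially a two-state machine. A minimal sketch, with my own names rather than the code from my zip archive: any input kicks off a 4-second "playback" during which the partner slider is driven randomly and all further input is ignored.

```cpp
// Illustrative Cololo-style state machine (my own names, not the
// downloadable Arduino code): after any slider input, the partner
// slider plays back random movement for 4 s; input is ignored meanwhile.
enum State { IDLE, PLAYBACK };

struct CololoBehavior {
  State state;
  unsigned long playbackUntil;               // millis() deadline
  static const unsigned long PLAYBACK_MS = 4000;

  // Call once per loop with the current time in ms and whether the
  // local slider moved. Returns true while the partner slider should
  // be moved randomly.
  bool step(unsigned long nowMs, bool sliderMoved) {
    if (state == PLAYBACK) {
      if (nowMs < playbackUntil) return true;  // still playing; ignore input
      state = IDLE;                            // playback finished
    }
    if (sliderMoved) {
      state = PLAYBACK;
      playbackUntil = nowMs + PLAYBACK_MS;
      return true;
    }
    return false;
  }
};
```

On the Arduino the time stamp would come from `millis()` and the random movement from `random()` writes to the motor.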

I set out to keep the possibility of feeling, on slider A, the random movement of slider B that follows A's initial movement. This proved rather tricky in a closed-loop feedback system, though it did teach me a thing or two about implementing such behavior. It also confirmed, on the one hand, that my current platform has its limits for more complex behaviors, and on the other, my own limitations when it comes to control-systems theory and implementation.

Luckily the Cololo system doesn’t have any direct feedback, so for now I don’t need it. In my current code I applied a bit of a blunt method to get the Cololo behavior. In future iterations I will definitely need the feedback, so I will have to come up with a more elegant solution. I am now looking into possible collaborations with experts in the field of mechanical engineering and control systems theory.
My current Arduino code for the Cololo behavior can be downloaded here (as a zip archive).


Iteration on first build: Slider v02

Moving on from Slider v01, I've tried to improve the performance of the system and to move towards more stable, flexible and (hopefully) reusable Arduino code.

In this video you see a motorfader box on the left and another one on the right. In between is a tablet connected to a computer, of which you see part of the screen at the bottom.
With the pen and tablet, I control the bottom slider on the screen (the white one). The top green slider represents the box on the right, and the lower green slider the box on the left. The size of the box in the white slider shows the difference between the pen input and the average position of all three sliders together.

The electronic circuit that controls the motorfaders has not changed from v01; I mostly rewrote the Arduino code to include a PID controller (well, using just the proportional and derivative components), inspired by Wikipedia and this arduino.cc post.
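The core of such a PD update fits in a few lines. This is an illustrative sketch with my own names, not my actual slider code: the motor command is a term proportional to the position error plus a damping term on the error's rate of change.

```cpp
// Illustrative PD controller (no integral term), not the actual
// slider firmware. 'error' is target position minus fader position.
struct PDController {
  float kp, kd;       // proportional and derivative gains
  float prevError;    // error from the previous loop iteration

  // dt is the loop time in seconds; returns the motor command.
  float update(float error, float dt) {
    float derivative = (error - prevError) / dt;
    prevError = error;
    return kp * error + kd * derivative;
  }
};
```

The proportional term pulls the fader toward the target; the derivative term damps the motion, which is what shapes the 'friction' feel.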

I have added the possibility to connect the system to a Max/MSP patch, so a third slider can interact with the system.
However, the serial communication increases the duration of the program loop, and this affects the gritty, ‘friction’ feel of the sliders as well as the stability of the whole system.
Another TODO is to implement error detection/correction in the serial communication, since there currently seems to be some noise on the line.
The whole system also seems to ‘wiggle’ toward the high slider positions when Max/MSP is attached. So far I have no clue why. (I have added some smoothing/averaging on the analog reading in the Arduino, and it seems to help a little.)
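The smoothing I mention is just a running average over the last few analog readings. A sketch of the idea, with my own names and an arbitrarily chosen window of 8 samples:

```cpp
// Simple moving-average smoother for noisy analogRead() values
// (illustrative; the window size of 8 is my own arbitrary choice).
const int WINDOW = 8;

struct Smoother {
  int buf[WINDOW];  // ring buffer of recent samples
  int idx;          // next slot to overwrite
  long sum;         // running sum of the buffer contents

  // Feed one raw sample; get back the average of the last WINDOW samples.
  int add(int sample) {
    sum -= buf[idx];     // drop the oldest sample from the sum
    buf[idx] = sample;   // store the new one
    sum += sample;
    idx = (idx + 1) % WINDOW;
    return (int)(sum / WINDOW);
  }
};
```

The trade-off is latency: a larger window smooths more noise but makes the fader position lag further behind the finger.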
Arduino code:

Max/MSP patch


The first build: Slider v01

slider disassembly
To set off my research in a hands-on fashion, I am building a system that enables a haptic connection between two (or more) people. The system consists of modules, each containing a motorfader, and each motorfader tries to follow the position of the other. The central idea of this system is that action and (haptic) feedback are collocated. (ref. Wensveen)(expand with theory?: Lenay, Deckers, Gibson, Merleau-Ponty)

The goal of this building project is, on the one hand, to learn new and hone existing tinkering and prototyping skills, and on the other, to literally get a feel for what it means to create a haptic connection between two or more people and to explore the feel of different variables in that connection, e.g. elasticity/firmness, friction, time delay.

To some extent this system echoes inTouch, a classic tangible interaction system built by Scott Brave, Andrew Dahley, and Professor Hiroshi Ishii of the Tangible Media group, MIT Media Lab.

(more details after the jump)
Continue reading