The Practice of Constructive Design Research

– a conversation during:


This year’s DRS conference was redesigned. Apart from regular paper sessions, the program now featured a debate at the start of each day and the new format of ‘Conversations’.

The (open) format of a conversation seemed to me very well suited for the methodological thread I have been spinning in my project. In a previous post I discussed a paper I wrote for the IASDR’13 conference regarding my search for methodological support in existing discourse on Research through Design/Constructive Design Research.
For the conversation at DRS2014 the intent was to bring the daily practices of design researchers in this field to the forefront and avoid more abstract reflections.

Initially, the intent was to explore possible reasons why so much of the discourse in Constructive Design Research centers on its academic standing, the kinds of knowledge contributions it provides, and the forms and formats in which it is published. Those reasons may have to do with the varied, possibly disparate, academic backgrounds of the people practicing this kind of research.
But that could again very quickly become an abstract, political discussion, of little practical use for projecting and navigating my own project. So instead I decided to leave those issues implicit at this point.

My conversation at DRS2014 was titled: The Practice of Constructive Design Research.
Catalysts (= invited participants) for the conversation were researchers in this field: Lorenzo Davoli, Mahmoud Keshavarz, Pierre Lévy and Ambra Trotto.
In order to feed and frame the conversation, I made a video containing statements taken from interviews with more consolidated researchers: Pelle Ehn, Daniel Fällman, Caroline Hummels, Johan Redström and John Zimmerman.
In the interviews, which ranged from 15 minutes to a full hour, many things were discussed that I could not or did not want to use for the purpose of this conversation. They were nonetheless very rich material for developing my ideas. I'm still thinking about what to publish from them, and in what form.

The conversation was, it seems, well received. There was a good number of people in the room, many joined in, and we received much positive feedback on how the session went. For me it was a success in that the conversation developed consistently and we avoided too much abstraction.
Having spent many days on the preparations (interviews and editing the video), I felt we only started scratching the surface of many aspects I had hoped to dig into much deeper. Maybe my expectations were a bit ambitious for a 90-minute discussion; maybe I also set the conversation up rather broadly. All in all, I have learned, and am still learning, a lot from reflecting on this event, both about the subject and about how to do these things.

Integral conversation

Conversation Starter video

All in all, this year’s DRS conference was one of the better conferences I have attended.
Throughout the conference there was a friendly, critical and engaged atmosphere; there was incentive, time and space to exchange and actually develop ideas. The plenary debates at the start of each day were, I think, a great way to set the tone. The informal and friendly, but knowledgeable and capable, attitude of the organizers certainly set a good example. Similarly, the branding, graphic design and routing of the conference were simply well done (art direction by Marije de Haas). And then there are the difficult-to-grasp qualities of the isolated but international atmosphere of Umeå Arts Campus.


Audience Interactions

In recent years there seems to be a wave of audience interaction (participation?) technologies. Here I list a few that I am aware of; if you know of others, let me know!

One of the earliest systems I know of is an experiment by Loren Carpenter, presented at the SIGGRAPH conference in 1991 (historic video here). He provided an audience with small paddles, reflective green on one side and reflective red on the other. Using cameras and some image processing algorithms, he projected just these paddles on a screen. He then went through a series of applications for these algorithms, ranging from simply showing the paddles as pixels to having (parts of) the audience control the movements of game elements depending on the red/green ratio, for example the paddles of a game of Pong.
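The core idea of the red/green voting can be sketched in a few lines. This is a hypothetical illustration, not Carpenter's actual system: the function names and the mapping to a Pong paddle's velocity are my own assumptions.

```python
# Hypothetical sketch of the red/green voting idea: classify each detected
# paddle in a camera frame as 'red' or 'green' and map the ratio to a control
# value, here the vertical velocity of a Pong paddle.

def red_green_ratio(paddles):
    """paddles: list of 'red'/'green' labels detected in one camera frame."""
    if not paddles:
        return 0.5  # no votes: neutral
    green = sum(1 for p in paddles if p == "green")
    return green / len(paddles)

def paddle_velocity(paddles, max_speed=5.0):
    """Map the green fraction to a velocity: all green -> up, all red -> down."""
    ratio = red_green_ratio(paddles)
    return (ratio - 0.5) * 2 * max_speed

print(paddle_velocity(["green"] * 3 + ["red"]))  # mostly green: positive (up)
```

The nice property of this scheme is that it degrades gracefully: individual mis-detections barely matter, because only the aggregate ratio drives the game.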

Cinematrix game example


This technology was patented and further commercialized through a company called Cinematrix.


At ICMI 2002, Dan Aminzade, Randy Pausch and Steve Seitz presented work on Interactive Audience Participation. On his (old) work pages, Aminzade shows some of the ways the presented techniques were implemented. Watch him talk about it:


Less interactive, but still causing awesome effects are systems that more or less turn audience members into pixels.

In 2012, FanFlash premiered on German TV:


You may recently have seen the halftime show of the 2014 Super Bowl. The clever people at PixMob were responsible for the light effects there.

They have done, and are still doing, various versions of their technology for turning the audience into pixels that are part of the light show. My personal favourite is the beach balls at Coachella in 2011.

Someone who has always been very effective at engaging audiences is DJ Tiësto. He uses PixMob’s audience-as-pixels technologies well in his sets.


Another company, Embraceled, has a similar system that was used at a party called Sensation.

They also promote another application, for business events, that bridges online and offline social networking. (Warning: corporate presentation video with cheesy background music, in Dutch.)


People as pixels, centrally controlled, reminds me of something:





I am still musing on what axes, or in what space, these different projects may be placed.


Scalable group touch: Spots

screen capture of 3 spots

Up until now, I have avoided making computer (screen) based group interaction media, the reason being that it seems more fitting to work in the tangible, physical domain: we understand things differently when we engage our bodies in interacting with the world than when we interpret things rationally.
I have made contraptions and installations through which small groups, a few more than two people, can interact. The problem with working with such media is the cost (in time, effort and money) involved in scaling up to much larger groups.
Screen-based interaction media are more easily scaled. A middle road between completely screen-based and completely tangible seems to be the mobile phone. Given the role phones play in our lives, it also seems quite fitting to develop an interaction medium on that platform.

So I did.


In its current (pilot) stage, Spots allows people to experience a sense of mutual touch through a mobile phone.
When you start Spots, you see an empty grey screen. When you drag your finger across the screen, a spot approximately the size of your fingertip follows it. These actions are broadcast to other people running Spots.
When other people with the app come ‘online’, a vaguely visible spot appears with a soft sound and their touch actions leave fading traces. When your spot and one of theirs overlap, a vibration can be felt and a quickly fading ripple is visible, showing a trace of your touch.
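The overlap behavior can be sketched roughly as follows. This is a minimal illustration of the logic as I describe it above, not the actual Spots code: the coordinates, the radius value and the event names are all assumptions.

```python
import math

# Illustrative sketch of Spots' touch-overlap logic: two spots 'touch' when
# their circles intersect, which triggers a vibration and a fading ripple.

SPOT_RADIUS = 40  # roughly a fingertip, in pixels (assumed value)

def spots_overlap(mine, theirs, radius=SPOT_RADIUS):
    """mine, theirs: (x, y) centers of the two spots."""
    dx = mine[0] - theirs[0]
    dy = mine[1] - theirs[1]
    return math.hypot(dx, dy) < 2 * radius

def on_remote_touch(mine, theirs):
    """React to a broadcast touch position from another Spots user."""
    events = ["draw_fading_trace"]             # their touch always leaves a trace
    if spots_overlap(mine, theirs):
        events += ["vibrate", "draw_ripple"]   # mutual touch: felt and seen
    return events
```

The point of keeping the check this simple is latency: each broadcast touch only needs one distance computation before the device can respond.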

You can try it out! Currently there are these versions:
Spots for Android
Spots for Mac (nb: this is not a signed app, see below)
If the app starts but nothing happens, either you have no internet connection or the server is down. Tap/Click the top 10 pixels of the app screen to bring up some detailed info. Let me know your experiences.
On Mac, if it says the file is damaged when you try to open it, you need to allow your Mac to run unsigned applications: go to System Preferences > Security & Privacy and, on the General tab, select Allow applications downloaded from: Anywhere.

The video above shows what it looks like from the Spots server's point of view (the server runs on a Mac).

Spots is a re-imagining, for my own research context, of the app touchThrough developed by Gabrielle Le Bihan and colleagues. (Gabrielle presented touchThrough at TEI'13 and published some of her research with it at CHI'13.)

Yes, I am aware of the very nice Feel Me project by Marco Triverio. There certainly are similarities, but in my view our intentions are quite different.
I am developing a research design that mediates between an individual and the group dynamics they are part of; Marco developed an app that aims to establish an intimate link between individuals.

Article for Dutch journal of medicine

The Dutch Journal of Medicine (Nederlands Tijdschrift voor Geneeskunde) approached me to write an article for their end-of-year issue, themed 'the Bionic Man'.
I took the assignment as an opportunity to muse about how my research relates to other things that extend human capacity.

It has been an enlightening experience, both with regard to the subject and with regard to the process of this kind of writing. In this case the editors had an overview of all the articles that would make up the theme issue, and they had a particular purpose in mind for my slot. Had I known this from the start, I might have been less disappointed by the extensive editing performed on my original article, which radically changed its purport. Nevertheless, the work done might come in handy as part of a chapter for my thesis.

Now that the issue has been published, I understand the changes that were made. I am happy to see that many of the philosophical and ethical points addressed in my original article do get attention in other articles, even if they were removed from mine.

My original article is titled: “Wat maakt de mens” (double meaning: “What makes a human” and “What do humans make”)
My article on the NTvG’s website is titled “Transformatie gedreven door techniek” (Technology-driven Transformation). The full article is behind a pay-wall. Mail me for a copy.

The Wickedness of Design Research Practice- IASDR 2013

We submitted a paper that I wrote together with Johan Redström to the IASDR’13 conference in Tokyo. The paper was accepted and I consequently went to Tokyo to present it. Below you can see a video of me doing my talk at IASDR and you can review the slides I used in it.

The paper can be found here: The Wickedness of Design Research Practice. (or from the conference website here)

In this paper we look at current themes in (interaction) design discourse in order to find support for dealing with the difficulties I encounter in structuring and projecting my own design research practice.


TEI 2012, Kingston Canada

From February 19 to 22, 2012, the Tangible, Embedded and Embodied Interaction conference (TEI 2012) took place in Kingston, Canada.
As Camille and I taught a studio ‘Designing Haptics’ there, we were also able to attend the rest of the conference.

All the paper sessions of the conference were streamed live as well as archived and can now be reviewed online.

One of the results of the workshop:

Other results can be viewed through the wiki-page.

‘Affective Computing’ – Affective Interaction

They have been publishing chapters of an online interaction design encyclopedia, where each chapter is written by an expert in a particular area of the field. I had come across it before for the chapter on Social Computing, which discusses crowd-sourcing implementations of social computing, such as Wikipedia and the scoring systems of e.g. Amazon, as well as mediated communication systems like Twitter and Facebook. A nice read, and in particular a possibly helpful breakdown of the crucial elements of such social systems.

Now I am posting because there is a thought-provoking discussion of Affective Computing in response to a chapter of the same name written by Kristina Höök. The chapter is followed by a well-written response from Prof. Roz Picard, who coined the term Affective Computing and drafted the first maps of that then poorly-charted terrain in her book Affective Computing. Next comes a rather artless response from one of the bigger names in the field of design and emotion, Prof. Paul Hekkert. I do have to agree with the gist of his response. He, amongst others, quotes the work on the compelling concept of 'inherent feedback' by my colleague and friend Miguel Bruns Alonso. The discussion centers on whether some HCI/AI approaches to Affective Computing are reductionist and cognitivist, versus the more holistic and phenomenological approach to Affective Interaction that Höök describes: the interactional approach. That concept reminds me of the work of Stephan Wensveen, e.g. his DIS 2000 paper Touch Me, Hit Me and I Know How You Feel: A Design Approach to Emotionally Rich Interaction and his 2005 dissertation, A Tangibility Approach to Affective Interaction.

Against this backdrop I am currently reading the PhD thesis work of Joris Janssen. At first sight Joris' work seems to take a reductionist approach to empathic mediation (what he calls physiosocial technology), but particularly in the later chapters he shows examples of both a reductionist, lab-based approach and a more holistic, real-life setting.
What fascinates me most in Joris' work is the idea of technology that supports (even promotes?) empathy in dyads (two people). Of particular interest to me is his (forthcoming) research on providing some form of feedback based on the correlation between one person's physiological signals and those of another. About a year ago Joris and I discussed social bio-feedback in more detail, after we got into contact over my Master's thesis on that topic and his paper on Intimate Heartbeats. A particular issue we discussed then was the modality of such feedback.

Control theory demo

A nice example of control theory relevant to interaction design, made by Aldo Hoeben (thanks for the link!) at the TUD IO StudioLab, back in 2002/2003

full screen

DDD: Light

Lamp_turret

Once in a while the DQI group at ID TU/e does a few days' workshop: DQI Doing Days. This time we explored the design of complex systems by creating dynamic light objects that communicate. Three groups each created two objects that communicate through their light behavior, wirelessly (through XBees), or both. We were asked to create functions in the Arduino code that could be activated by other light objects. The workshop was organized by Remco Magielse and Serge Offermans of the Intelligent Lighting Institute.

My group set out to create an object that would disrupt the behavior of the others' objects, both through light and through XBee communication. We created two turrets, one with a light sensor and one with 3 W LEDs. The turret with the sensor continuously searches for the lightest spot around it, telling the other turret where it is. The turret with the super-bright LEDs then turns in the same direction to 'blind' the brightest spot. (We assumed that the other light objects would also have light sensors, and that delivering a bright light to them would disable any of their behavior based on varying light levels.) In the end the behavior is a bit more surveillance-like than virus-like.
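The search-and-blind loop of the two turrets can be sketched as follows. This is a simplified illustration in Python rather than our actual Arduino code; the sensor readings and the message format are made up for the example.

```python
# Illustrative sketch of the two turrets' behavior: the sensor turret scans
# for the brightest direction and reports it; the LED turret turns that way.

def brightest_angle(readings):
    """readings: {angle_degrees: light_level} from a scan of the surroundings."""
    return max(readings, key=readings.get)

def sensor_turret_step(readings, send):
    """Find the lightest spot and tell the LED turret where it is."""
    angle = brightest_angle(readings)
    send({"type": "target", "angle": angle})  # stand-in for the XBee message
    return angle

def led_turret_step(message, current_angle):
    """Turn toward the reported angle to 'blind' the brightest spot."""
    if message.get("type") == "target":
        return message["angle"]
    return current_angle
```

Splitting sensing and acting over two objects is what made the XBee link essential: the LED turret has no sensor of its own and acts purely on the reported angle.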

One end of the spectrum: Inspired by Cololo

I am building the slider systems (v01 and v02) to do some experiments with which we hope to study how varying qualities of an interaction medium influence the experience.
A project that shows a minimalist version of mediated interaction and telepresence, is Cololo, from the Uchiyama lab in Tsukuba. (be sure to have a look at some of their other projects)

I have now gotten my slider system to mimic the Cololo behavior as follows: When one slider is moved, the other slider moves randomly for about 4 seconds. During that period, the system does not respond to input on either slider.
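The Cololo-style behavior described above can be sketched as a small state machine. My actual implementation is Arduino code; this is a simplified Python illustration, with the class and method names invented for the example (the 4-second lockout is the value I use).

```python
import random
import time

# Simplified sketch of the Cololo-style behavior: moving one slider makes the
# other move randomly for ~4 seconds, during which all input is ignored.

RESPONSE_SECONDS = 4.0

class CololoPair:
    def __init__(self):
        self.busy_until = 0.0  # while now < busy_until, input is ignored
        self.moving = None     # which slider is currently moving randomly

    def on_input(self, slider, now=None):
        """slider: 'A' or 'B'. Returns the responding slider, or None if busy."""
        now = time.monotonic() if now is None else now
        if now < self.busy_until:
            return None  # system does not respond during the random movement
        self.moving = "B" if slider == "A" else "A"
        self.busy_until = now + RESPONSE_SECONDS
        return self.moving

    def random_step(self):
        """A random target position (0..1) for the responding slider."""
        return random.random()
```

Note that this open-loop version sidesteps the closed-loop feedback problem discussed below: the random movement is simply commanded, never fed back to the initiating slider.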

I set out to preserve the possibility of giving feedback on slider A about the random movement of slider B in response to the initial movement of slider A. This proved rather tricky in a closed-loop feedback system, though it did teach me a thing or two about how to implement such behavior. Moreover, it proved once more that my current platform has its limits for more complex behaviors, and that I have my own limitations when it comes to 'control systems' theory and implementation.

Luckily the Cololo system doesn’t have any direct feedback, so for now I don’t need it. In my current code I applied a bit of a blunt method to get the Cololo behavior. In future iterations I will definitely need the feedback, so I will have to come up with a more elegant solution. I am now looking into possible collaborations with experts in the field of mechanical engineering and control systems theory.
My current Arduino code for the Cololo behavior can be downloaded here (as a zip archive).