
technology


We have been talking a lot about the Reading Glove at conferences and events, and are trying to get more people to make gloves of their own to play with.  In the interests of providing the most up-to-date information possible to potential collaborators, here is the most recent version of the glove circuit.
Glove Diagram v.5

After several months of trial & error we have finally completed a working first prototype of the TUNE glove, recently re-named the “Reading Glove”.  The Reading Glove combines an Innovations ID-12 RFID reader, an XBee Series 2 module (on a Lilypad XBee radio), an Arduino Lilypad, and a simple homemade power supply to transmit RFID tag information wirelessly to a laptop running Max MSP.  Read on for specific details on how it works, what it does, and how to make one of your own.

The complete circuit

Overview:

Watch our Video of the Reading Glove in Action
The Reading Glove is an RFID-based interface for interaction with tangible objects.  It allows interactors to manipulate and handle tagged objects in order to access digital information that has been “embedded” in them.  Each object in this interaction is marked with a unique RFID tag.  These unique identifiers allow the object to be associated with specific digital information, in the form of audio, projected visualizations, and text.  The glove consists of an Arduino Lilypad microcontroller, an XBee Series 2 wireless radio, and an Innovations ID-12 RFID reader, embedded in the palm of a soft fabric glove.  The Reading Glove transmits the tag information to a computer running Max MSP, which uses it to trigger digital events.
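On the computer side we use Max MSP, but if you just want to verify that tags are arriving over the wireless link, a minimal Processing sketch along these lines will do the job.  This is a hypothetical test harness, not part of our Max patch, and it assumes the ID-12's raw frame (STX, ten ID characters, two checksum characters, CR, LF, ETX) is forwarded unchanged by the glove:

// Hypothetical Processing test sketch: print tag IDs arriving over the XBee's serial link.
import processing.serial.*;

Serial xbeePort;

void setup() {
  println(Serial.list());                               // pick out the XBee's port from this list
  xbeePort = new Serial(this, Serial.list()[0], 9600);  // the ID-12 talks at 9600 baud by default
  xbeePort.bufferUntil(3);                              // 3 = ETX, the last byte of each tag frame
}

void draw() {
  // nothing to draw; serialEvent() does the work
}

void serialEvent(Serial port) {
  String frame = port.readString();
  if (frame != null && frame.length() >= 11) {
    String tagId = frame.substring(1, 11);              // skip STX, keep the ten ID characters
    println("Tag: " + tagId);
  }
}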

(more…)

Returning to the hardware prototyping after a bit of a hiatus, we started plugging away again at the intricacies of serial communication with the Lilypad.  Yesterday we got the XBees up and running again, so today we tackled the Lilypad + RFID reader, a combination that we had not gotten working before.  (more…)

Our submission to UC Santa Barbara’s Bluesky Innovations Competition, which was themed around Social Computing in 2020, took 1st place!  Our project envisions a world where mobile technologies have followed their current trajectory, further blurring the lines between online and offline spaces.  Taking a page out of Cory Doctorow’s book, we consider the possible implications that this trend will have on our ability to manage our privacy against our desire for digitally augmented socialization.  Below is the award-winning essay and the creative visualization that we designed to supplement it.
SENSe

By Karen and Joshua Tanenbaum
The imagined social technology of SENSe (Socialization, Exploration, Negotiation, and Security) is a natural extension of two current trends in social networking: social presence and privacy concerns. It is evident that the growth in popularity of services like Facebook, Twitter, Flickr, and Google Talk and the parallel increase in mobile device usage are symptomatic of larger changes in the nature of social spaces, private spaces, and human interconnectedness. Already, we have seen how social networking supports the emergence of a form of ambient social presence. People now think nothing of signaling their receptiveness to phone calls by toggling a status indicator in Skype, while Twitter and Facebook allow users to periodically broadcast short status updates to their entire social circle. These updates and status indicators foster an “always-on” sense of one’s social geography: what people are doing right now, minor incidents that occurred throughout their day, how they are feeling and what they are planning. Our new networked world supports the dramatic and the mundane in seamless concert. When disasters occur, these services support efficient real-time coordination of rescue and relief efforts; when history is made, people around the world receive it in a thousand tiny haiku. If you see that a colleague is having lunch down the block, you might join them for a bite to eat; if you see a friend is sad or angry about something, you might call to offer comfort. The combination of distributed social broadcasting and pervasive mobile devices is a potent one that has already changed how we communicate in dramatic ways.

(more…)

I recently bought myself the Thinkgeek Phidget RFID Kit to play around with. I was sad to discover that for some reason the Python libraries don’t work on my system (some combination of Python version + Vista + who knows yields errors that I don’t have the knowledge or energy to fix).  However, I was able to get it working in Processing.  Since I’ve seen other sites which claim that Phidget and Processing don’t work together easily (this may have been true in prior Processing releases), I thought I’d put my efforts out there for other interested parties to use.

I relied heavily on this post from Mikko at the Umeå Institute of Design, which shows how to work with a Phidget Accelerometer in Processing. I began by downloading the Phidget Java library files, and I was working with Processing 1.0.1.  The first thing to do is to select “Open File” from the Sketch menu and navigate to the Phidget21.jar file that came with the Phidget Java code.  This imports the necessary library into the code folder of the Processing sketch. Then I basically altered the code in “RFIDExample.java” to work in Processing, using the accelerometer example as a guideline.  The full code is below. (If you copy it into Processing and Auto Format it, it will probably be more comprehensible.) It creates a small window that, if everything is working right, prints the ID of a tag brought close to the Phidget reader.  There is more detailed information printed to the system out window as well.  Pretty basic, but from this framework the program can go in any number of directions. Hopefully this is a useful starting point for anyone looking to integrate RFID capabilities into Processing.  Let me know in a comment here if you find any bugs or have any suggestions.
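For reference, here is a stripped-down sketch in the same spirit: a minimal version, not the full listing behind the “more” link, assuming the standard phidget21 Java classes RFIDPhidget and TagGainListener:

import com.phidgets.*;
import com.phidgets.event.*;

RFIDPhidget rfid;
String lastTag = "";

void setup() {
  size(300, 100);
  try {
    rfid = new RFIDPhidget();
    // Print each tag to the console as it comes into range
    rfid.addTagGainListener(new TagGainListener() {
      public void tagGained(TagGainEvent e) {
        lastTag = e.getValue();
        println("Tag read: " + lastTag);
      }
    });
    rfid.openAny();               // open the first RFID Phidget found
    rfid.waitForAttachment(5000); // wait up to five seconds for the reader to attach
    rfid.setAntennaOn(true);      // the antenna must be on before any tags can be read
  } catch (PhidgetException ex) {
    println(ex);
  }
}

void draw() {
  background(0);
  text(lastTag, 20, 55);          // show the most recent tag ID in the sketch window
}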

(more…)

It’s been a while since I posted anything here, not because it has been quiet following the thesis defense, but because things have not slowed down at all in the last three months.  Today I’m taking a bit of time to catch my breath, and to catch this blog up on the newest scheme concocted in the labs of Team Tanenbaum.  Karen and I have been tossing this idea back and forth since early in the summer, but have recently resolved to try and run with it in earnest (note the copious use of ball-game metaphors).  Below, you will find two versions of the TUNE documents.  The first is a section out of a recent grant proposal I wrote.  It is focused on one aspect of my research within TUNE.  The second is an extended description of the project, which encompasses both my and Karen’s interests more fully.
(more…)

Identifying Components

In a previous post, I laid out the basic definitions of user modeling for ubiquitous environments. In this followup post, I go into further detail regarding how one actually goes about doing user modeling in ubiquitous environments. In a foundational work on user modeling, Adaptive User Support, the editor Robert Oppermann identifies three parts of an adaptive system: “an afferential, an inferential and an efferential component”. While alliterative, these terms are somewhat obfuscatory, I find. I have come to think of them as “input, reasoning and output”. Not as catchy, I know. Here’s what the terms refer to:
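(As a rough structural illustration of that three-part split, here is a hypothetical Java-style sketch of my own devising, not Oppermann's; the names are just mnemonic stand-ins:)

// Hypothetical sketch: the three components as interfaces, plus the loop that ties them together.
interface AfferentialComponent {            // "input": observe the user and the environment
  SensorReading sense();
}

interface InferentialComponent {            // "reasoning": update the user model from observations
  UserModel infer(SensorReading reading, UserModel current);
}

interface EfferentialComponent {            // "output": adapt the system's behaviour to the model
  void adapt(UserModel model);
}

class SensorReading { /* raw observations */ }
class UserModel { /* inferred user state */ }

class AdaptiveSystem {
  AfferentialComponent input;
  InferentialComponent reasoning;
  EfferentialComponent output;
  UserModel model = new UserModel();

  AdaptiveSystem(AfferentialComponent in, InferentialComponent reason, EfferentialComponent out) {
    input = in; reasoning = reason; output = out;
  }

  void step() {                             // one pass around the adaptation loop
    SensorReading reading = input.sense();
    model = reasoning.infer(reading, model);
    output.adapt(model);
  }
}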

(more…)

Clarifying terms

My dissertation work is in the area of “user modeling for ubiquitous environments” and some days even I am not sure what that means. But I’ve been wrestling with it for about a year now, and I think I’m at the stage where I have a grasp of the basic idea and am beginning to get a handle on the areas of contention. In this post, I am attempting to formalize this knowledge as a way of double-checking myself, and possibly getting some feedback on my understanding by showing the write-up to my supervisor and other interested parties. Let me begin by focusing on each of the components of the loaded phrase “user modeling for ubiquitous environments”.

(more…)

Current State of the Geek: Wherein I briefly bring the internet up to date on my doings and happenings. Don’t get too excited – until the semester ends I’m far too busy to really bring this site up-to-date.
Projects on the back burner:

Transmission: I love my novel, but it has sadly been left untouched while I negotiate the shifting terrain that is my priorities at SIAT.
The Geek Game: Briefly reared its ancient and venerable head, but has been allowed to once again slumber in the depths of my consciousness until such a time as I need to call forth its power to smite my foes. Wait…what were we talking about again?
The LARP Project: This project briefly flirted with Karen’s doctoral research, but didn’t get to second base due to improper hygiene and bad taste in movies. It is currently moping around at home wondering why it will never know true love.

Ongoing Projects:

Scarlet Skellern and the Absent Urchins: My current priority. An interactive narrative exploration of user emotion and context. My top secret collaborator and I are almost finished designing and implementing a simple user model in Flash which will allow readers of our interactive comic to express their mood and emotions to the system, thus altering the contextual elements in which the narrative is situated. In plain-speak this means that if the reader indicates happiness to the system, then things like the lighting, colors, textures, ambient noises, musical themes, time of day, weather, and other environmental aspects of the story will shift subtly to reflect their mood back at them. Keep an eye out for our prototype by April 13th.

rePhase:
Project Description
Project Diagram
Submitted to: The Vancouver New Forms Festival

rePhase is an interactive audio installation that repurposes abandoned stereo components into an immersive participatory musical experience. Composed of structural and audio components rescued from junk shops, thrift stores, and surplus dealers, rePhase gives abandoned objects an opportunity for a second life.

Untitled Film Project: This one stays largely under wraps until I have more time to develop it. What I can say is that it is set in the near future, it will incorporate opportunities for multi-linear interactive narratives, as well as traditional linear storytelling, and that it will be a grotesque hybrid of Primer, Office Space, and Clerks, only much more noir.

El Institute of Inappropriate Interfaces (EIII): From our upcoming web launch:
“Here at the Institute, we are investigating revolutionary new ideas in interface design. Ideas so new and brilliant that no one has ever attempted them before! Prepare your children, your friends, and your domesticated animals, for technology so cutting edge that it is considered dangerous to run with by 9 out of 10 mothers, and for ideas so satisfyingly delicious that you should wait at least an hour before swimming after thinking about them!”

Among the EIII’s initial launch offerings will be the much awaited Audio Interfaces for the Deaf, and the long hyped EZ-Prototype Oven.

LIFE – Low-resource Improvised Filmmaking Environment: Special thanks to Jim Bizzocchi for the acronym. Still very much on the drawing board, but still a force to be reckoned with on the project list. Look below for some of the theory surrounding it, under its previous acronym: LRIF.

That’s the state of geek at the moment. I’ve been composing music like a madman this month for Scarlet Skellern, so keep an eye on my personal site for the soundtrack as I start to nail the final tracks down. See y’all in the summer.

As Josh stated, we are heading up to Canada next month to begin (or return to, in my case) our lives as poor grad students. We will both be attending the School of Interactive Arts and Technology at Simon Fraser University. I will be pursuing a PhD in Information Technology, and Josh will be working on an MA in Interactive Arts. We’ll be taking classes in new media, narrative, technology, artificial intelligence, design, and culture. Can’t wait!
