hive
Nowadays, we are co-located but isolated. The computer renders our intellectual activity anonymous within physical space. Before the digital age, we had cues of each other's work: book covers, writing tools, scattered notes. These artifacts encouraged serendipitous interaction. Now we sit nearly still, staring at our screens, with minimal gestures and no hint of our work. We are often more connected to our online social circles than to the people who surround us. This is a missed opportunity for conversation, sharing and collaboration, especially in interdisciplinary educational and work settings.
hive is an ambient spatial media project that aims to enhance collaboration and sharing in physical spaces. It is Twitter-meets-physical-context: the system visualizes and shares users' work status using a combination of website input, Kinect sensing and projection.
The system consists of a Kinect, the hive website, a C++ desktop app, and a projector. The website is built with Sinatra (Ruby) and DataMapper, and serves its data in .csv format, which the C++ desktop app fetches using the ofHttpUtil library. In parallel, Kinect data is collected and processed with the OpenCV library inside openFrameworks. Together, the web and Kinect input determine what is projected onto the table surface.
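As a rough sketch of the web side of this pipeline, the snippet below serializes user status records into the kind of CSV payload the desktop app would fetch over HTTP. It uses only Ruby's standard CSV library; the field names and sample records are illustrative assumptions, not taken from the actual hive site.

```ruby
require "csv"

# Hypothetical user-status records, as the hive website might store them
# via DataMapper (field names are illustrative, not from the real project).
STATUSES = [
  { name: "luisa", project: "hive", status: "tuning the Kinect blob tracking" },
  { name: "alex",  project: "hive", status: "styling the Sinatra views" },
]

# Build the CSV payload a route like GET /statuses.csv could return,
# ready for the C++ desktop app to fetch and parse.
def statuses_to_csv(statuses)
  CSV.generate do |csv|
    csv << %w[name project status]  # header row
    statuses.each { |s| csv << [s[:name], s[:project], s[:status]] }
  end
end

puts statuses_to_csv(STATUSES)
```

In a Sinatra app, this string would simply be returned from a route with the `text/csv` content type; the openFrameworks client then splits each line on commas to recover the fields.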
Collaborator: Luisa Pereira-Hors
Instructors: Jared Shiffman (Spatial Media), Ruxy Staicut (Comm Lab Web)
March 2012
Filed under academic, itp, interaction, installation