TUIO / IO Socket To Trigger JavaScript and CSS3 Animations
TUIO can connect to the web browser through WebSockets (e.g. socket.io), using JavaScript to trigger particular animations or visualisations on the page. I have managed to produce a sort of form where text can be input to produce a query. Nevertheless, I wanted to perform those queries only with physical objects (fiducials). Furthermore, I have found it very difficult to find alternative options for users to input a particular query term, for instance the name Picasso. For this reason, I am mainly focusing on metadata elements (who, what, when, where) as TUI objects to assist the query production.
The main concept is that Europeana allows users to query their data through an API. Therefore, it is necessary to produce a URL that embeds the query metadata and data. JavaScript can simply join the elements once the full query has been assembled; the hard part is producing a well-formed URL from the user's input. This interface helps users to organise their ideas by assigning a particular form field to each fiducial object.
Here is the code to join the URLs. Special thanks to @WillFyson
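The original snippet is not reproduced here, but the URL-joining idea can be sketched as follows. The field names, the `buildEuropeanaUrl` helper and the `wskey` placeholder are illustrative; the endpoint path follows Europeana's public Search API docs at the time of writing and should be checked against current documentation.

```javascript
// Minimal sketch: join the fiducial-assigned form fields into one
// Lucene-style query string and embed it in a Europeana Search API URL.
function buildEuropeanaUrl(fields, wskey) {
  var parts = [];
  for (var key in fields) {
    if (fields[key]) {
      // e.g. who:"Picasso"
      parts.push(key + ':"' + fields[key] + '"');
    }
  }
  var query = parts.join(' AND ');
  return 'https://www.europeana.eu/api/v2/search.json' +
         '?wskey=' + encodeURIComponent(wskey) +
         '&query=' + encodeURIComponent(query);
}
```

Calling `buildEuropeanaUrl({ who: 'Picasso', where: 'France' }, 'demo')` yields a single URL with `who:"Picasso" AND where:"France"` URL-encoded in the `query` parameter.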
In order for users to identify when an object's field is active, I have decided to animate the placeholder background, using jQuery and CSS3. When a fiducial enters, it triggers the animation; when it is updated, the interface focuses the particular field so users do not have to click. Arguably, under this approach, users only have to worry about linking the things they are looking for and the name of the object they are looking for. Once the query has been formed, it retrieves the data in JSON, which is then visualised using a jQuery UI list element. The video below shows how the animations work once the object is placed in the camera range and how objects retrieved through Europeana's API are visualised on the website.
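The add/update behaviour described above can be sketched as a small event-to-action mapping. The fiducial-to-field table, the event names and the `uiActionFor` helper are hypothetical stand-ins for whatever the TUIO JS client actually emits; the jQuery calls in the comments show how the result would drive the page.

```javascript
// Hypothetical mapping from fiducial symbol ids to form-field ids.
var fieldForFiducial = { 0: 'who', 1: 'what', 2: 'when', 3: 'where' };

// Decide what the UI should do for a given TUIO event.
function uiActionFor(eventType, fiducialId) {
  var field = fieldForFiducial[fiducialId];
  if (!field) return null;
  if (eventType === 'addTuioObject') {
    // Fiducial entered: pulse the field's placeholder background.
    return { field: field, action: 'animate' };
  }
  if (eventType === 'updateTuioObject') {
    // Fiducial moved: focus the field so users do not have to click.
    return { field: field, action: 'focus' };
  }
  return null;
}

// In the page this would drive jQuery/CSS3, e.g.:
// $('#' + a.field).addClass('placeholder-pulse'); // CSS3 keyframe animation
// $('#' + a.field).focus();
```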
Querying With Tangible User Interfaces. A User Centred Design Experiment.
I have been investigating how users ask questions to navigate and explore the content (data) of museums and other Cultural Heritage (CH) organisations. This might seem straightforward when a specific museum collection has been set up. Nevertheless, when integrating data from different CH organisations, the way people see the content and how it is hosted can differ: users approach libraries in a different way than they approach museums. Under Europeana, many of these organisations are integrated through the Europeana Data Model (EDM). This way we can describe people (creators), places, dates, date periods, objects and many other descriptors to produce more accurate answers. Despite all this effort, and the sturdiness and accessibility of the data, it is still very complex not only to query it, but also to grasp the complexity and extent of the knowledge encompassed by it.
As mentioned above, Europeana has integrated data from many organisations into a single space through its data model, so the data and information is there. Despite this, it can be argued that there is not yet an optimal tool to produce knowledge from it. My research aims to find the most effective ways to engage with such information so users can produce knowledge from it. In previous posts and academic publications, I have discussed the different approaches that can be taken to develop such engagement tools, arguing for the use of Tangible User Interfaces as a possible solution. Therefore, to understand user needs, I devised a User Centred Design experiment in which I designed over 50 different tools (icons) that users might require to ask questions of a data repository such as Europeana's.
Icons for User Centred Design study.
I realised that this produces the same amount of complexity as working with a common Graphical User Interface (GUI). For this reason the study aimed to identify particular personas based on, among other factors, their digital generation, digital skill, cultural heritage background and knowledge of web tools. It is important to keep in mind that these icons represented TUIO actions, query actions and logic operators as well; reducing that complexity is the reason for performing the experiment.
Participants were asked to find particular artefacts such as: Picassos (things) that were not made by Picasso, or XVII century objects from France. These questions might seem simple, but arguably there is a certain level of complexity that might hinder engagement with the content when querying for those results. These questions can be asked through Europeana's API access or through the SPARQL endpoint. But many users will find it complicated to query through those particular approaches, and even more complicated to learn all the query syntax needed to perform the query. Moreover, the brain has to figure out the logical complexity on top of the syntax and interaction processes. Tangible Interaction can help segment those thinking processes and facilitate querying with a particular syntax.
Comparing TUI approach versus API and SPARQL
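To make the comparison concrete, here is roughly what the two text-based routes look like for "Picassos not made by Picasso". Both are indicative only: the API key is a placeholder, and the SPARQL fragment is a sketch of the shape of such a query against an EDM-style graph, not a tested query against Europeana's endpoint (prefixes omitted).

```javascript
// Route 1: the Search API, with a Lucene-style NOT clause.
var apiQuery =
  'https://www.europeana.eu/api/v2/search.json?wskey=YOUR_KEY' +
  '&query=' + encodeURIComponent('Picasso NOT who:"Picasso"');

// Route 2: the SPARQL endpoint; indicative shape only.
var sparql = [
  'SELECT ?item WHERE {',
  '  ?item dc:subject "Picasso" .',
  '  FILTER NOT EXISTS { ?item dc:creator "Pablo Picasso" }',
  '}'
].join('\n');
```

Either way, the user must know the syntax, the field names and the boolean logic before even starting; this is the load the tangible approach tries to offload onto objects.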
After performing the statistical analysis, the experiment showed the most meaningful approaches that users followed and their experiences when taking part in the experiment. This provided me with information about which artefacts (tools), logic procedures and particular user requirements needed to be implemented in the Tangible Interface to query cultural heritage data.
During the last weeks I have started developing what I would like to call DFPs, short for dynamic-fiducial-pyfos. With the help of some friends I now have a basic skeleton to extend my interactive experiments. Here is a video of the result:
Tangible Interaction and Pyfos
After I submitted my upgrade draft, I realised that I was going to encounter some issues when using pyfos as part of the fiducials for the TUIO system. Since users have to combine different concepts (e.g. Roman + pottery or painter + 1800), this would result in a large number of pyfos. The interface already has several objects that cannot be removed since they are part of the basic interactions, such as map navigation, box dragging, etc. Therefore, I decided to explore a little further: I needed to find a way to extend the capabilities of pyfos.
Pyfos have three main states: token, constraint and token+constraint. The TAC (Token and Constraints) paradigm in Tangible User Interfaces (TUIs) offers a set of constructs describing how these objects react. Nevertheless, because physical objects such as pyfos cannot morph or change according to user needs, it can be argued that technology should offer pyfos that can self-adapt or expand the constraints that bound them.
Other researchers are exploring how these TAC approaches can be expanded. It is this search that directed me to experiment with mini-displays and sensors. Since I had worked with some Internet of Things and Arduino projects, I thought of designing a display that detected two different RFIDs to make the combinations and display a specific result. That display presents the final combination in the form of a fiducial so the TUIO interface can detect it. This way, users can pre-design a concept combination and integrate it into a final dynamic-fiducial-pyfo (DFP) that encompasses that prior combination. Most importantly, DFPs can also produce other display results on the table-top without cluttering it with tools or options.
Producing DFPs
There are some alternatives for producing DFPs out there. Many of them need to be built from scratch, but there are some products that can be adapted to our needs without having to fiddle with or hack any of the electronics. Although there is a wide variety of tools available, these are the most 'approachable' ones I encountered.
Educubes
Since I had worked with Arduino and the Internet of Things before, I thought of building a mini-display with either RFID or another type of sensor. I found that the Educubes project presented a good opportunity to start developing this idea. It might also prove beneficial since there are some TFT mini-displays that support touchscreen actions as well.
Educubes by Adafruit
Although this presented a good opportunity, I needed to start producing tools that could work straight away instead of focusing on the electronics. Moreover, I thought that the size of the electronics was still quite big for them to be used on the prototype. But the main issue was working with a wide range of electronics and hacking them to do what I wanted.
Sifteo Cubes
After searching, I encountered Sifteo Cubes. These cubes already provide a very nice package that encompasses a wide range of electronics and a mini-display. Moreover, the cubes can be programmed through an SDK provided by the same company. I decided to jump ahead and ordered a set of second-generation Sifteo Cubes.
The surprise came when using the SDK. I was not the first user to be put off by its complexity. The Sifteo SDK uses C++ along with command-line tools for installation and device management. Moreover, through the forums I learned that the released SDK contained some bugs, which meant some of the tutorials did not work.
Nevertheless, I encountered some compiled SDK projects on GitHub, such as Investio and Sifteo Blickets. They provided me with some hints on how to actually start using and managing the cubes. I still had to learn how to program what I needed, so I started by learning how to make interactions. There is a base set of interactions supported by the sensors in the cube: tilt, pair, shake and press.
Sifteo Actions
Although it seems quite nice in pictures, the task was not so simple. Since I do not come from a programming background, working with C++ was a huge challenge. First I did a tutorial on neighbouring. Here is the video:
A few days after, I started working with other actions such as tilt and press. It is relevant to mention that I worked from the examples that came with the SDK, so the interactions were pretty much pre-designed and I was just learning a few basic commands. Here is a video of this stage.
The problems started when working with my specific requirements. I needed an array of options per cube that could be combined between cubes, so that the final combinations could be applied in the Europeana TUIO system. Using C++, this was not so straightforward. In a nutshell, this is what I needed to create:
Basic combination skeleton
It took me a lot of time and effort to find a way to program this interaction. I could have programmed something like this in other languages, but not in C++. Therefore I asked for some help to develop it. Kevin Lesur from One Life Remains gave me a hand, and this was the basic skeleton built for the interactions:
Operations and interactions skeleton.
Two cubes are required to make the combination through neighbouring. When they are combined, a third cube presents the combination, which will eventually show a fiducial, making it a DFP. To navigate between each cube's options, users can tilt the cube in either direction.
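The actual skeleton is C++ against the Sifteo SDK, but the interaction logic is language-agnostic and can be sketched in JavaScript. All names here (`Cube`, `neighbour`) are illustrative, not the real SDK API: each cube holds an array of options, tilting steps through them with wrap-around, and neighbouring two cubes produces the combined value the third cube would display as a fiducial.

```javascript
// A cube with a list of options and a current selection.
function Cube(options) {
  this.options = options;
  this.index = 0;
}
// Tilt in either direction (+1 or -1), wrapping around the option list.
Cube.prototype.tilt = function (direction) {
  var n = this.options.length;
  this.index = (this.index + direction + n) % n;
};
Cube.prototype.current = function () {
  return this.options[this.index];
};

// Neighbouring two cubes yields the combination the result cube shows,
// which would eventually be rendered as a fiducial (making it a DFP).
function neighbour(cubeA, cubeB) {
  return cubeA.current() + '+' + cubeB.current();
}
```

For example, with `new Cube(['Roman', 'Greek'])` and `new Cube(['pottery', 'coin'])`, neighbouring the two cubes in their initial state combines to `Roman+pottery`.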
This way I am hoping to carry on, go back to the TUIO experiments and see how these DFPs work on the tabletop system.
I had my last meeting with my supervisors yesterday. At this stage I have managed to develop enough tools in the interface to be able to present the idea of my research.
The interface is a combination of interaction methods that include haptic and tangible interaction. The technology used includes computer vision and tabletop interaction based on Web technologies such as JavaScript and browser rendering.
The Upgrade Process
So I have this month to:
Define my contribution
Define methodology
Structure thesis
The thesis should include:
Problem / Question
Literature Review
Methodology
Discussion
Expected Results
I am currently doing Web Science research. Web Science is an interdisciplinary field; we study the Web from different perspectives. In the case of my research, I am trying to understand how Cultural Heritage institutions can enhance their engagement and pedagogic activities online. For this, there is a wide range of disciplines that I have included in my research.
(Inter)Disciplinary approach
Human Computer Interaction
Through this approach I intend to understand how people interact with computers. Let's remember that on the Web, all interactions occur through an interface. For this reason it is important to study how these interactions occur.
Human Information Interaction
Under this scope, I intend to produce a better understanding of how people relate to and process information. Usually, when interfaces are developed, designers focus merely on the interaction process without thinking beyond the tool itself. The HCI community has started adopting this perspective in its methodology, but it has not yet been standardised as a design process in HCI.
Pedagogy / Psychology
Among many areas, I am focusing on Embodied Cognition. This thesis presents the idea that the mind is shaped by the human body: mental processes are not bound to the mind itself, and the body is a channel through which the human mind interacts with and reasons about the world. This is also highly linked to Constructivism, where people learn by experience. Constructivist learning experiences have commonly used cognitive processes as a pathway to empower learners. Even though constructivism is not limited to embodied activities, embodiment might provide a positive pathway for pedagogic activities.
Enhancing Engagement with Online Museums
Museums, as part of the Web, need to produce meaningful pedagogic activities. The pedagogic element is essential to Cultural Heritage institutions: it is one of their main reasons for funding, and even organisations that are not in the education business can benefit from such pedagogic activities. Nevertheless, this challenge is not easy to solve. For people to produce knowledge by themselves might not be as straightforward as one might think. There is a wide variety of ways people might approach information spaces such as websites, databases, books, etc., in order to extract knowledge.
By using the aforementioned disciplines in an interdisciplinary manner, I am attempting to produce a tangible user interface where people can ask questions of a linked data system populated with cultural heritage data.
I have been working with TUIO for the main interface between Europeana's API and the user. One of my main research questions is how users can ask questions through tangible interfaces. In this case, users should be able to ask questions about cultural heritage content from different organisations, with access to over 30 million metadata records that include books, photos, art, audio and artefacts, among others. On one hand, having access to all this content may benefit users, since they will have vast sets of information to answer their questions. On the other, they might get lost in all the data that is available to them.
When working with vast sets of information, users can benefit from dissecting the specific items of information they are looking for. Nevertheless, this process might prove difficult and might require a lot of concentration. By offloading this mental process onto physical objects, users might pace their thinking and assist themselves by using the physical objects as an aid.
My intention is to produce such objects to help users solve questions and find information from a data portal such as Europeana. For this I have created a starting skeleton of essential objects that can then be transformed into queries on the API.
Essential Queries
Who – What
Some of the most essential queries might include who or what users are looking for. These will be represented individually.
query = who:"string"
query = what:"string"
Places
Geolocation
By positioning fiducials on top of a map, users can add geo-coordinates to the query. This can be done by adding two values, either by tapping or by using two fiducials.
Countries are different from geolocation: countries are defined by names that are part of a human perception rather than a purely geographic one. This fiducial will therefore detect its x and y position and add the name of the closest place to the query. There will be several of these fiducials for adding more places to the query.
query = COUNTRY:string
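Snapping a fiducial's table position to the closest named place can be sketched as a nearest-neighbour lookup. The place list and its coordinates here are hypothetical stand-ins for whatever gazetteer the interface ends up using.

```javascript
// Hypothetical gazetteer: named places at normalised table coordinates.
var places = [
  { name: 'France', x: 0.40, y: 0.30 },
  { name: 'Spain',  x: 0.35, y: 0.45 },
  { name: 'Italy',  x: 0.55, y: 0.40 }
];

// Return the name of the place closest to the fiducial's x/y position.
function nearestPlace(x, y) {
  var best = null, bestDist = Infinity;
  for (var i = 0; i < places.length; i++) {
    var dx = places[i].x - x, dy = places[i].y - y;
    var d = dx * dx + dy * dy; // squared distance is enough to rank
    if (d < bestDist) { bestDist = d; best = places[i]; }
  }
  return best.name;
}

// Query fragment: 'COUNTRY:' + nearestPlace(fiducial.x, fiducial.y)
```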
Time
Another value that can be added to the query is the date range for the time period you want to constrain the data to. Time being abstract, it is very difficult to represent and manipulate. Europeana provides a starting year and an ending year. For this reason I will be using two 'dials' to forward or rewind the starting or ending year.
query = YEAR: 0000 TO 0000
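The two dials can be modelled as a small object holding both bounds; each dial nudges one bound and the pair is then serialised into a bracketed range expression (Lucene-style, as in Europeana's query syntax). The `YearRange` name and method names are illustrative.

```javascript
// Two 'dials' over a start year and an end year.
function YearRange(start, end) {
  this.start = start;
  this.end = end;
}
// Forward (+step) or rewind (-step) the starting year.
YearRange.prototype.dialStart = function (step) { this.start += step; };
// Forward or rewind the ending year.
YearRange.prototype.dialEnd = function (step) { this.end += step; };
// Serialise into a bracketed range for the query string.
YearRange.prototype.toQuery = function () {
  return 'YEAR:[' + this.start + ' TO ' + this.end + ']';
};
```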
Boolean Operations
Basic boolean operations can be added to the query to produce a more specific question. Operations such as AND, NOT and OR can be used. This fiducial has to be used in addition to the fiducials previously mentioned, otherwise there will be no values to compare.
This marker will commonly require one of the three options. Nevertheless, users might benefit from looking at all the options to analyse what they are asking.
query = where:(Paris NOT France)
Container
To put everything together, users might benefit from structuring their thoughts on a template where they can finalise their sentence or question. This does not mean that they must place artefacts in that order; it is merely a starting point suggesting how they might structure questions.
There is still a wide variety of query options that could be added, but this is the essential information that users will need to input at a specific time in order to retrieve meaningful results.
I managed to put text on an HTML canvas based on the functions loaded from the JavaScript code. One group of JS files runs the TUIO library, socket and fiducial management, and another group creates and places the text into the HTML document. This way I can start placing HTML elements into the document based on particular properties provided by the fiducials once they interact with the table.
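The second group of files boils down to a callback that turns a fiducial's properties into a positioned DOM element. The `createLabel` helper and the property names (`symbolId`, normalised `x`/`y`) are assumptions about what the TUIO JS client hands over; the commented registration line shows where it would hook in.

```javascript
// Build a text element placed at the fiducial's on-screen position.
function createLabel(doc, obj) {
  var el = doc.createElement('div');
  el.className = 'fiducial-label';
  el.textContent = 'fiducial ' + obj.symbolId;
  // Normalised TUIO coordinates (0..1) mapped to percentage offsets.
  el.style.position = 'absolute';
  el.style.left = (obj.x * 100) + '%';
  el.style.top = (obj.y * 100) + '%';
  return el;
}

// In the page, the TUIO client's add-object event would drive this, e.g.:
// client.on('addTuioObject', function (obj) {
//   document.body.appendChild(createLabel(document, obj));
// });
```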