Monday, May 10, 2010

HCI Remixed Chapters 1, 4, 5

Chapter 1: My Vision Isn't My Vision: Making a Career Out of Getting Back to Where I Started
Author: William Buxton

Chapter 4: Drawing on SketchPad: Reflections on Computer Science and HCI
Author: Joseph A. Konstan

Chapter 5: The Mouse, the Demo, and the Big Idea
Author: Wendy Ju

Comment: Zach

Summary:

HCI Remixed is a compilation of essays by multiple authors reflecting on their HCI backgrounds. Chapter 1 covers Buxton's initial skepticism about why he would ever need a computer, and then how he came to love computers as tools for creating his music. He went from an ordinary student to a master of a machine that allowed him to create music with it. He used a chord keyboard to play different chords on the computer; each finger produced a different kind of chord. There was also a "mouse" that was basically a block of wood with some wires attached to it.
The next chapter discussed the Sketchpad system, which used a light pen on the display. It also anticipated object-oriented programming before object-oriented programming took off in the computer science world. Sketchpad influenced programs that we use today such as AutoCAD.
The last chapter we read dealt with the mouse that Engelbart and English created. Everyone seemed to focus on the fact that it was a mouse they could do things with, when its creators had bigger ideas they wanted to share. They wanted to show what was actually possible and how this device could lead to bigger and better ways for people to interact with their computers. Unfortunately their demo wasn't received that way, and people really only focused on the fact that it was a mouse.

Discussion:
I think these were some of the more interesting things we've read in this class. I would have liked to actually have read the entire book but just reading these was better than nothing. It's really cool to see where computing all began.

Tuesday, April 20, 2010

Obedience to Authority

Author: Stanley Milgram

Comments: Zach

Summary:
Milgram's book focused on his controversial experiment and the obedience of the human subjects who took part in it. In the experiment, a subject would shock another person, the "victim," whenever the victim answered a question incorrectly. There were different variations of the experiment designed to see how obedient a person would be to authority. Most people will follow the experimenter in a situation where they believe the experimenter has authority. Even if subjects feel they are doing something morally wrong, they will continue with the experiment rather than disobey the perceived authority of the experimenter. Even when subjects were in close proximity to the person they were shocking, they still followed the experimenter's orders, which they explained by saying they had lowered the status of the victim in their minds. When the scenario was changed so that a "normal person" gave the orders and the experimenter became the victim, subjects sided with the experimenter and stopped the shocks, which shows that the experimenter's authority carries a special credibility. When the experimenter was not in the room, subjects didn't feel it necessary to shock at high levels. Subjects also didn't feel responsible for their actions when the experimenter was present, because they were only following orders and therefore the experimenter bore all the responsibility. It's interesting to see how people change their minds about something when they "get caught." The last observation from the experiments was that when subjects were given a choice of shock level, they stuck to the lower levels. This was interpreted to mean that subjects were not taking out aggression with high shock levels but simply following the authority they believed the experimenter had.

Discussion:
I think this book was an interesting read, and we got a more in-depth view than we did from the one chapter in Opening Skinner's Box. It's interesting to think that people will listen to someone they believe to have authority even if they don't actually have it, and how hard that perceived authority is to break so that people can "be free." It's also a scary thing to think about, because some people could be intimidating enough to seem like they have authority, yet really be someone whose intentions are illegal.

Thursday, April 15, 2010

Re-Placing Faith

Paper: Re-Placing Faith: Reconsidering the Secular-Religious Use Divide in the United States and Kenya

Authors: Susan P. Wyche, Paul M. Aoki, and Rebecca E. Grinter


Comments: Patrick


Summary:

This study described a six-month design project that examined the use of technology for different religious purposes. The authors focused on understanding U.S. "megachurches" and got a sense for how parishioners were using technology, and then did fieldwork in Nairobi, Kenya. Below is a breakdown of religious affiliation percentages for participants in the U.S. and Kenya.

Interviews with the participants were conducted in middle- and upper-middle-class homes. The interviews lasted about one to one-and-a-half hours and included a tour of each home or office. Technology used by the participants included SMS text messages for daily Bible verses as well as calendar software for planning their days around different religious activities. Note-taking was also prominent across the different congregations, though there were differences between how the U.S. and Kenya participants took notes. The U.S. participants used the internet to supplement their note-taking, and their churches also had websites with outlines of the sermons. America seemed to be more technology-oriented than Kenya.

Discussion:

I thought that this paper was rather interesting even though I personally don't see the need for technology for my own religious beliefs. Emails and websites seem to be very popular with the church scene now so I understand people wanting to be able to see what is going on in their church without having to drive up there. It will also help people plan ahead when thinking about going to a certain church function.

Tuesday, April 13, 2010

Intelligent User Assistance...

Paper: Intelligent User Assistance for Cost Effective Usage of Mobile Phone

Authors: Deepak P, Anuradha Bhamidipaty, Swati Chall

Comments: Aaron

Summary:
This paper deals with enhancing mobile phone design to be more cost-conscious for users. In lower-income regions like India and China, many people find it difficult to afford mobile phones because of how expensive they can be. Those who do own mobile phones use them sparingly so that they can manage the cost and achieve "cheap usage." This paper focuses on techniques that make the phone assist the user in being cost-conscious so that they can stay within their budget. Usage history logs are sequences of entries ordered in time, each with a type, duration, time of activity, and category of recipient. To find patterns in these logs, the authors used an algorithm described below.
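The paper presents its pattern-mining algorithm in a figure, so here is just a rough Python sketch of the general idea of mining frequent patterns from a usage log, counting (type, hour, recipient-category) combinations. All the entries, field names, and the support threshold are made up for illustration, not taken from the paper:

```python
from collections import Counter

# Hypothetical usage-log entries: (activity type, duration in seconds,
# hour of day, recipient category) -- invented values, not the paper's data
log = [
    ("call", 120, 9, "family"),
    ("call", 60, 9, "family"),
    ("sms", 0, 13, "friend"),
    ("call", 300, 9, "family"),
    ("sms", 0, 21, "friend"),
]

def frequent_patterns(entries, min_support=2):
    """Count (type, hour, recipient-category) combinations and keep
    those meeting the support threshold."""
    counts = Counter((kind, hour, cat) for kind, _dur, hour, cat in entries)
    return {pattern: n for pattern, n in counts.items() if n >= min_support}

patterns = frequent_patterns(log)
```

A pattern like "calls to family around 9 a.m." surviving the support threshold is exactly the kind of regularity a suggestion generator could then budget against.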

They also built a suggestion generator that lets users know when they are making more than the recommended number of calls for staying within their budget. To test both components, they took logs from four people in the lower-income group and some logs from the higher-income group. There were strong call patterns among the lower-income users because they are trying to save money.

Discussion:
This paper was interesting because it explores a topic that hasn't really been looked into before. Keeping a cell phone bill within budget is usually up to the user, who has to track how many minutes they have used and how much everything they do on the phone costs. Many phone companies have websites that show all of this information, but no one has really thought about making your phone help you stay within your budget. I think this kind of application would be useful for people who can't really afford cell phones but need one to keep in contact with people. It might also help younger adults learn how to budget their money and still have a cell phone to call their friends with.

Thursday, April 8, 2010

Intelligent Email

Paper Title - Intelligent Email: Reply and Attachment Prediction

Authors: Mark Dredze, Tova Brooks, Josh Carroll, Joshua Magarick, John Blitzer, Fernando Pereira

Comment: Jill

Summary:
This article deals with different types of email prediction. There are reply predictions, attachment predictions, and other tasks that can be treated as binary classification problems. Each email was represented as a sparse vector, and learning was done with a logistic regression classifier. The system was evaluated using the F-measure.
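For reference, the F-measure they evaluate with is just the harmonic mean of precision and recall. A minimal sketch (the counts in the example are invented, not the paper's results):

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 40 correctly flagged "needs reply" emails, 10 false alarms, 20 misses
score = f_measure(40, 10, 20)
```

With those counts, precision is 0.8 but recall is only two thirds, so the F-measure lands in between, which is why it is a useful single number for comparing classifiers.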

Reply prediction
This helps decide which emails need a response and allows users to mark their messages as "replied" or "needs reply".


Each user has a profile created from their past email responses and their contact list. If a user is only CC'd on an email, they are most likely not required to respond; if the email was sent directly to them, regardless of who else is CC'd, they most likely are. Using TF-IDF scoring on the words of an email, the system can weight term frequencies to help figure out which emails contain questions. Tests were done on four different participants and their emails.
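The TF-IDF weighting mentioned above is standard: a term scores high when it is frequent in one email but rare across the mailbox. A tiny sketch with a made-up corpus (not the study's data):

```python
import math
from collections import Counter

def tf_idf(term, doc, corpus):
    """Term frequency in the document times inverse document frequency."""
    tf = Counter(doc)[term] / len(doc)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df)
    return tf * idf

# Hypothetical mini-corpus of tokenized email bodies
emails = [
    ["can", "you", "send", "the", "report"],
    ["meeting", "notes", "are", "attached"],
    ["can", "we", "meet", "tomorrow"],
]
rare = tf_idf("send", emails[0], emails)    # appears in one email only
common = tf_idf("can", emails[0], emails)   # appears in two emails
```

A word like "send" that shows up in only one message outscores a word like "can" that shows up everywhere, which is how distinctive question-bearing terms get surfaced.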

Since precision was higher than recall, many messages were evidently difficult to classify as needing a reply or not.

Attachment Prediction
Forgetting to attach a document to an email that requires one can cause chaos in the workplace. When production depends on a document being edited before work can continue and someone forgets to attach it, problems follow. One suggestion is to have the email client propose possible attachments drawn from previous messages received from the same person.
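The paper trains a classifier for this, but even the simplest possible stand-in, a hypothetical keyword heuristic of my own, shows the shape of the problem:

```python
# Hypothetical cue words -- a crude stand-in for the paper's learned model
ATTACHMENT_CUES = ("attached", "attachment", "enclosed", "see the file")

def likely_missing_attachment(body, has_attachment):
    """Flag a message that mentions an attachment but doesn't carry one."""
    mentions = any(cue in body.lower() for cue in ATTACHMENT_CUES)
    return mentions and not has_attachment
```

A real system would learn these cues from past mail instead of hard-coding them, but the warn-before-send behavior is the same.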

Each prediction system could be useful in a work setting, but future work will explore more complex user models to solve these problems better.

Discussion:
I thought this article was interesting because sometimes emails can get overwhelming. I know that I've waited for days for a professor or TA to email me back answers to a few questions I had for them and sometimes I wasn't able to continue work on an assignment until they responded. I think something like this, after being developed better, could possibly benefit the users of a specific email client. It may get people to respond quicker or remember to send the appropriate attachment when emailing someone. I know I've forgotten to attach something that was the initial point of sending the email in the first place.

Tuesday, April 6, 2010

Opening Skinner's Box

Book: Opening Skinner's Box

Author: Lauren Slater

Comments: Jill

Summary:
This book discussed the top psychological experiments that have shaped the way we view things today. Each chapter discussed a new experiment.

  • B. F. Skinner: Skinner's experiment dealt with rats that were conditioned to do things for rewards. He found that the mind responds well to rewards, while punishments only hinder the conditioning. Maybe this could be linked to the way children learn from their parents?
  • Stanley Milgram: Milgram designed an experiment where a "teacher" was told to shock another person whenever they answered a question wrong. The voltage went from a small shock all the way up to a level labeled as lethal, and 65% of participants went all the way to that maximum voltage during the experiment. This experiment showed that people are obedient to authority, and to just what extent some people can be obedient.
  • David Rosenhan: Rosenhan, along with a few associates, got himself admitted into mental hospitals by telling staff that he was hearing a voice that said "thud." Even though each participant was actually sane, they were admitted anyway, and the psychologists really believed they were insane. This experiment caused a lack of trust in psychology; today it is more difficult to get admitted to a mental hospital but just as easy to get prescribed drugs.
  • John Darley and Bibb Latane: Participants in groups of different sizes were played a recording of someone having a seizure, which they believed was an actual person and not a recording. Since no one else in the group was reacting, most participants also didn't react. This experiment showed that if no one else in a group responds to an emergency, it takes an individual much longer to react than it would if they thought they were alone and the only person able to help.
  • Leon Festinger: This chapter dealt with cognitive dissonance. These studies found that people will change their beliefs in order to not sound crazy.
  • Harry Harlow: Harlow studied infant monkeys and how they became attached to a fake "mother" that was soft, versus a "mother" that was hard but gave them food. The babies preferred the soft "mother," though of course when they needed to eat they would go to the metal one. He concluded that touch is important to a baby's development.
  • Bruce Alexander with Robert Coambs and Patricia Hadaway: This experiment dealt with addiction, with rats as the test subjects. Alexander placed some rats in a nice, clean environment and placed others in a solitary, confined environment. The rats in both environments were then given clean water and water laced with morphine. The rats in the confined conditions drank the morphine water, while the rats in the nice environment drank the regular water. This experiment suggests that drug addiction isn't a "natural affinity" for drugs but possibly a situation-based addiction.
  • Elizabeth Loftus: This experiment dealt with false memories. Participants started to remember things that never really happened to them, reciting false memories such as being lost in a mall as a kid, and each participant was certain these memories had actually happened. Loftus used her findings to keep people from being wrongly convicted of sexual abuse crimes that weren't reported until years after the supposed crimes were committed.
  • Eric Kandel: Kandel experimented on sea slugs to show that memory is strengthened by increasing the strength of connections between neurons; he used the slug's neurons to observe the process of memory creation. This work later helped in the development of memory-enhancing drugs.
  • Antonio Moniz: Moniz was the first person to perform a lobotomy on patients who were depressed or had other mental disorders, attempting to cure them of the disorder. Today we have medicine instead of such surgery, and his experiments helped us understand which parts of the brain the drugs should target.

Discussion:
I thought all of these experiments were interesting, and it was kind of neat to see where we get our current thinking about certain things. Some of these experiments seem to be ignored when people learn about certain disorders, though, and it is odd that people aren't revisiting some of these experiments to find more humane ways to reproduce the results. Slater herself is rather interesting, and it wouldn't surprise me if there was actually something wrong with her... she at least wrote a good book?

Thursday, April 1, 2010

Automatically Detecting Pointing Performance

Authors: Amy Hurst, Scott E. Hudson, Jennifer Mankoff, and Shari Trewin

Comments: Aaron

Summary:
There are people in the world who would love to use computers but are physically unable to perform the tasks necessary to operate one. This article deals with "pointing" on a computer, which basically consists of clicking an item, dragging and dropping it somewhere, and moving the mouse to a target the user wants to click. To adapt to users with different disabilities, it is necessary to create software that assesses the user and automatically changes to make the computer easier for each individual to use. Statistical models were constructed from tests on participants. Each model found correlations between "the occurrence of certain features and the occurrence of the property it tries to predict." Once a model is created, it can figure out whether the user will benefit from an adaptation of the software to fit their needs. To begin, the authors took a group of people and measured differences in missed clicks, movement speed, and other data that would help design adaptable software. One group of people had no disabilities and one group did, and the data sets were compared.

They also compared data between young adults, adults, older adults, and individuals with Parkinson's disease. They found that older adults took much longer to reach peak velocity than the young adults and "adults" did. Individuals with Parkinson's had the slowest peak velocity, and they also tended to pause near the end of their movements. Based on this data, they created a model to distinguish between the groups. The accuracy results are shown below.



Now that they had a good model for detecting which group a person falls into, they wanted to develop a system to predict whether a user would benefit from an adaptation of the software they are using. This statistical model had 94.4% accuracy. With such high accuracies on each test, their future plan is to build software that can automatically assess a user's performance and adapt to their needs.

Discussion:
I think adaptable software for people whose disabilities keep them from enjoying computer use would be very useful. Everyone who wishes to use computers should be able to enjoy them, and if there is a possibility that we could make that happen, then why not do it? I think this software has really good potential for users with disabilities in everyday computing, but it may also help people in the gaming world. It will be interesting to see what happens if this software is actually developed.

Tuesday, March 30, 2010

Sensonomy

Paper: Sensonomy: Intelligence Penetrating into the Real Space

Author: Jun Rekimoto

Comments: to come later...

Summary:
This paper wasn't very long, so there isn't too much to tell. Sensonomy is an integration of collective intelligence and pervasive sensing. It sounds like there are a few papers by Rekimoto that describe what Sensonomy does for a mobile device. He hopes this approach will be a new way to create intelligent systems and interfaces. Some of the papers describe a city-scale indoor and outdoor positioning system that uses Sensonomy.

Discussion:
Sounds like there are going to be a few papers that better describe Sensonomy. This was only a small tidbit of information but from the sounds of it Sensonomy is going to help with cell phones and their sensing features. I think this has the potential to contain some interesting documents and hopefully I'll get the chance to read them.

Monday, March 22, 2010

Inmates are Running the Asylum - Part 2

Book: The Inmates are Running the Asylum Ch. 8-14

Author: Alan Cooper

Comments: Zach

Summary:


The last half of the book starts by describing the way a programmer thinks. Since programmers usually work alone in a cubicle, they tend to think they are somewhat superior to others because they don't have to answer to anyone while programming. Cooper also mentions that programmers try to reuse code, and the code they reuse becomes the basis for whatever they are working on. There is also a scenario explaining how Microsoft has a culture where bright programmers end up running the show whether they realize it or not.

The next three chapters start by talking about the personas that designers create in order to build a program that fits each persona's needs. Ultimately they want the product to be designed for one person, the primary persona, while every other persona is still happy with the design. Cooper then goes on to explain different goal types and the multiple goals that coincide with them. Chapter 10 also discusses how software needs to be "polite" and behave basically like a considerate human. Chapter 11 presents different scenarios a designer has to think about when designing for a persona. A design needs a precise vocabulary so that communication between human and computer can be more effective.

The last three chapters start by talking about how programs are tweaked until they work and how products are tested with the public (such as in focus groups). Then the book takes a "how do you know who to listen to" sort of spin, where designers have to decipher which opinions would truly be best for whatever they are designing.

Discussion:
I think the second half of the book was a little easier to read because I wasn't getting as mad at it as I was with the first half. I still see certain traits in some programmers (i.e., "I work alone and therefore am better than you"), but I think companies are now trying to create products that their consumers are looking for. I've recently looked into consulting jobs, and I think those firms truly try to figure out what their customers want and work with them in almost every step of the design process. Since this book is rather old, I think Cooper hit some points that were relevant back then, but things have changed since, so hopefully we've learned from the mistakes he describes.

Saturday, March 20, 2010

Placement-Aware Mobile Computing

Paper: Lightweight Material Detection for Placement-Aware Mobile Computing

Authors: Chris Harrison and Scott E. Hudson

Comments: Ross

Summary:
This paper talked about a sensor that could be placed on a mobile device to tell where the device has been placed. Beyond detecting where in the world it is, the sensor could help a device decide where it has been put (i.e., in your hand, in a pocket or purse, or on a desk). This could extend a cell phone's battery life by detecting when the screen really needs to be on: if the phone is in a pocket or purse, there is no reason for the screen to show the time of day, but if it is sitting on a desk, there may be. The sensor could also help decide whether the phone should ring or vibrate. When a phone is placed on a desk and knows it is at a workplace, it could vibrate to alert the owner to a call or text message; if the phone is deep in a purse or a pocket and not at a workplace, it could make an audible noise so the owner realizes someone is calling. To identify where the mobile device is located, the sensor uses different colored LED elements to measure the reflective properties of the surrounding materials. The pictures below show the sensor and the amount of light certain materials reflect.
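The ring-versus-vibrate behavior described above boils down to a simple policy. Here is my own guess at those rules as a sketch; the placement labels and the policy itself are hypothetical, not the paper's actual logic:

```python
def alert_mode(placement, at_work):
    """Pick ring vs. vibrate from sensed placement and location.
    (A hypothetical policy illustrating the paper's scenarios.)"""
    if at_work:
        return "vibrate"        # don't disturb coworkers
    if placement in ("pocket", "purse"):
        return "ring"           # muffled, so use an audible alert
    return "vibrate"            # e.g. sitting out on a desk at home

mode = alert_mode("pocket", at_work=False)
```

The interesting part is that the placement argument comes from the material sensor itself, so the user never has to flip the switch manually.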

To identify materials, they used a naive Bayes classifier trained over five trials to evaluate classification accuracy. The sensor's accuracy was tested on twenty-seven sample materials twice a day for three days. Each time something was detected, data points were recorded, and at the end the naive Bayes classifier was trained to figure out the total accuracy. Sixteen participants recruited through an email were asked to fill out a questionnaire about where they usually place their mobile devices. The researchers then gathered information from the participants at their homes and workplaces, and the end results are shown in the picture below.


They had an overall placement-detection accuracy of 94.4%, covering an average of 4.3 placements out of an average of 5.5 total placements.
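As a rough illustration of how a naive Bayes classifier can separate materials by reflectance, here is a tiny Gaussian naive Bayes sketch. The materials, LED channels, and every number are invented for illustration; the paper's real features and data live in its figures:

```python
import math

# Hypothetical per-material reflectance samples under two LED colors
training = {
    "wood":   [(0.62, 0.40), (0.60, 0.42), (0.64, 0.39)],
    "fabric": [(0.20, 0.15), (0.22, 0.13), (0.19, 0.16)],
}

def fit(data):
    """Estimate a per-class (mean, variance) for each LED channel."""
    model = {}
    for material, samples in data.items():
        stats = []
        for channel in zip(*samples):
            mean = sum(channel) / len(channel)
            var = sum((x - mean) ** 2 for x in channel) / len(channel)
            stats.append((mean, max(var, 1e-6)))  # avoid zero variance
        model[material] = stats
    return model

def classify(model, reading):
    """Pick the material with the highest Gaussian log-likelihood."""
    def log_likelihood(stats):
        return sum(-((x - m) ** 2) / (2 * v) - 0.5 * math.log(2 * math.pi * v)
                   for x, (m, v) in zip(reading, stats))
    return max(model, key=lambda mat: log_likelihood(model[mat]))

model = fit(training)
```

A new reading near one material's cluster of reflectance values gets assigned that label, which is all "training on five trials" needs to mean here.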

Discussion:
I think this would be very useful to have, because there are many cell phones out there that eat battery power by unnecessarily displaying things on the screen when it would be best for the screen to stay completely off. This could help save battery life, and it could be handy to have your phone automatically switch to vibrate or silent when you walk into work or some other place where your phone is not supposed to be on. Future work could be to have the phone switch modes whenever you are in a classroom where your phone is not supposed to be on. Many people keep their phones in their backpacks or pockets in class, and the way this sensor would be implemented, their phone would actually ring instead of vibrating when they get an incoming call or text message. It would be more beneficial to have it switch itself to vibrate in class and change back to a ringer when they step outside of the classroom.

Friday, March 19, 2010

Flexible Input Devices

Paper: Towards More Paper-like Input: Flexible Input Devices for Foldable Interaction Styles

Written by: David T Gallant, Andrew G Seniuk, and Roel Vertegaal

Comments: Aaron

Summary:

This paper presents a new type of input device: foldable, flexible paper with infrared reflectors placed on it. There are many different inputs the device can express, and each movement is tracked by a webcam with a ring of infrared LEDs around the lens. The device acts like a mouse, and the deformation of the paper is what defines each input that can be performed. The computer tracking the device uses OpenCV, OpenGL, and a C++ program to follow the deformation in real time.

The above picture shows some of the different actions the paper could perform to input something into the computer. All of the motions correspond to specific actions one would normally do with a mouse, but instead of one form of input you now have many.

Discussion:
I don't think I really find this to be very useful because it would take a very long time to remember each different motion that corresponds to what action I want to accomplish. I think that the public may shy away from this because it is not very intuitive and the simple ease of a mouse is turning into something that doesn't seem very easy at all.

Tuesday, March 16, 2010

Taskpose

Paper: Taskpose: Exploring Fluid Boundaries in an Associative Window Visualization

Author: Michael Bernstein, Jeff Shrager, and Terry Winograd

Comments: Zach

Summary:
This paper introduced the idea of moving task classification from the binary model that has been used to a fuzzier model, arranging windows in groups based on their associations with each other. Previous task-based systems seemed to force users to identify certain tasks explicitly and were not easy to use. This paper introduces Taskpose, a collection of algorithms that arrange windows by importance (most used or looked at) and by their association with each other, such as when a user goes back and forth between two windows because they are essentially related for whatever purpose the user has.

As seen in the picture above, Taskpose uses thumbnails to represent the different windows that the user has opened. To better explain how Taskpose works they have provided a diagram of what could happen which is shown below.


Taskpose contains three algorithms for grouping windows. The first is the WindowRank algorithm, which determines window importance; it is based on Google's PageRank and reflects, for example, that the user has used a certain window longer than another. The second is the window relationship algorithm, which determines which windows are associated with which: each window maintains a ratio of switches from itself to every other window, and each relationship is weighted by this ratio. The last is a spring-embedded graph layout algorithm, which actually lays out the thumbnails for the user.
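The switch-ratio and PageRank-style ideas fit together like this. Here is a minimal sketch, assuming made-up window names and switch counts; the damping value and iteration count are generic PageRank defaults, not Taskpose's actual parameters:

```python
# Hypothetical switch counts: switches[a][b] = times the user jumped
# from window a to window b
switches = {
    "editor": {"docs": 8, "email": 2},
    "docs":   {"editor": 7, "email": 1},
    "email":  {"editor": 2, "docs": 1},
}

def switch_ratios(counts):
    """Each window weights its neighbours by its share of outgoing switches."""
    return {src: {dst: n / sum(dests.values()) for dst, n in dests.items()}
            for src, dests in counts.items()}

def window_rank(ratios, iterations=50, damping=0.85):
    """PageRank-style importance: windows receiving a large share of
    switches from already-important windows rank higher."""
    windows = list(ratios)
    rank = {w: 1 / len(windows) for w in windows}
    for _ in range(iterations):
        rank = {w: (1 - damping) / len(windows)
                   + damping * sum(rank[src] * ratios[src].get(w, 0.0)
                                   for src in windows)
                for w in windows}
    return rank

ranks = window_rank(switch_ratios(switches))
```

The window that receives the biggest share of switches ends up with the highest rank, and those scores could then drive thumbnail size and placement in the spring-embedded layout.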

To test this, they had ten undergraduate students use Taskpose for at least one hour every day for a week so that the results reflected natural usage. They found that participants used Taskpose in their everyday computer use for longer than the required hour each day, and many said they would love to keep using it because it made their everyday computing a lot easier.

Discussion:
I think this could save the hassle of having so many windows open on the taskbar that you can't even see them all without scrolling through the browser icon. It might also reduce the need for multiple tabs in one browser. Clustering windows by importance could also help people get tasks done faster, because all of the windows are in view without having to search through the taskbar to find them. One participant would have liked the windows to dock to the bottom like the taskbar, and I think the future work the authors mention has already arrived with Windows 7's docking of the internet browser to the top of the screen.

Map-Based Storyboards

Paper: Creating Map-based Storyboards for Browsing Tour Videos

Author: Suporn Pongnumkul, Jue Wang, and Michael Cohen

Comments: Ross

Summary:
This paper focused on taking long tour videos and allowing users to condense them the way they want, in order to make tour videos more fun. Tour videos are often long and boring, and nobody wants to watch them, so filming them is almost a waste of time because they are almost never shown to anyone. This application lets users take a long video and extract its highlights to shorten the film so that other people will watch it. The idea was to have a map-based storyboard alongside the video so that the user could interact with it. The authors developed an interactive authoring tool, plus a viewing tool for actually watching the video. The system takes in a video and a map, and the user processes them with the authoring tool. Once the user has edited the movie the way they want, the edits are passed as an XML configuration to a web-based viewing tool. Throughout this process the user is never editing the original video, so nothing is destroyed; the original is still there in case something goes wrong in editing and the user wishes to start over.

The picture above shows what the authoring tool looks like. It has the video in the top left corner, with the storyboard map on the right-hand side. The pictures and pins on the map correspond to each other, and you can roughly see what the pathway looks like. After the video has been edited with this tool, it can be uploaded into the viewing tool.

This picture shows what the viewing tool looks like when a video is uploaded. The tools were evaluated by having a group of people answer a few questions about each one, so that the authors could get input on how interaction with the tools went. The participants understood them rather quickly, and they went through each playing option and the directions given to them for the experiment. Most participants preferred the "Playing Highlight" and "Map Navigation" modes because they saved time while still showing the whole story. The authoring tool wasn't actually tested, because that would have required participants to film and edit their own videos; the tests were conducted on videos that had already been edited, so that the authors could see how people liked the viewing tool and the storyboard that went with the video.

Discussion:
I thought this would be very interesting to actually use, because lots of people have videos they would like to edit, whether they are videos of trips or a wedding video they want to shorten to just the highlights. It would have been nicer to see whether the authoring tool was easy for people to get used to, since that seems to be the entire point of this paper, but knowing that the viewing tool is easy to use will help the authors in their future work. I think tests need to be run on the authoring tool before anyone can say this application would be useful.

Thursday, March 4, 2010

Rating vs. Personality Quiz based Preference

Paper: A Comparative User Study on Rating vs. Personality Quiz based Preference Elicitation Methods

Written by: Rong Hu and Pearl Pu

Comments: Jacob

Summary:
This paper compared a rating-based recommender system with a personality quiz-based recommender system. Both systems rated movies and made suggestions to the users. The rating-based system was MovieLens and the personality quiz-based system was Whattorent. Thirty participants took part in this study.

The graph shows the differences between all of the participants. The MovieLens recommendation interface had to be changed slightly to match Whattorent so that it would only recommend one movie at a time. The similarity between the systems, and which one users liked better, was tested by:
  1. Perceived Accuracy: measured by the participants' ratings of the recommended movies and their responses to the post-questionnaire at the end of the study.
  2. User Effort: calculated from how long participants contemplated their answers and how much effort they felt they had put into answering the rating or quiz questions.
  3. User Loyalty: asked the participants whether they would be likely to recommend a system to their friends.
For perceived accuracy, the participants were asked a question stated positively and the same question stated negatively. In this section the personality quiz-based system scored higher than the rating-based system.
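Pairing a positively and a negatively stated question is a standard way to combine Likert responses into one score: you reverse-score the negative item and average. As a rough sketch under that assumption (the function name and the 1-5 scale are illustrative, not from the paper):

```python
# Hypothetical sketch of combining a paired positive/negative Likert item
# (1-5 scale) into a single perceived-accuracy score. The negative item is
# reverse-scored so that higher always means "more accurate".

def perceived_accuracy(positive_item, negative_item, scale_max=5):
    """Average the positive item with the reverse-scored negative item."""
    reversed_negative = (scale_max + 1) - negative_item
    return (positive_item + reversed_negative) / 2

# A participant who agrees the recommendations fit (4/5) and disagrees
# that they miss the mark (2/5) gets a combined score of 4.0.
print(perceived_accuracy(4, 2))  # prints 4.0
```

Asking the same thing both ways like this helps catch participants who answer without reading the question.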

This graph shows the results of the users' ratings of the recommended movies. The people who were given recommendations through the personality quiz-based system were more interested in the movies recommended to them than those who used the rating-based system.

The graph above shows the effort the participants felt they were expending in either system. Most users felt that the rating system took more effort than the quiz system because they didn't feel they could accurately evaluate a movie they hadn't seen in a while. They couldn't remember all of the plots, and when other people had only given a movie an average rating, they weren't sure how to rate it themselves.

This last graph shows how likely the participants would be to recommend each system to their friends. The personality quiz-based system earned more loyalty than the rating system did. The paper concluded that the personality quiz-based system has the potential to be a great tool and an alternative to existing methods.

Discussion:
I think this is rather interesting because people are always trying to make recommender systems better so that users will be happy with the movies they are recommended. If the personality quiz has better accuracy and users feel like they don't have to put much effort into it, then they will probably use it more. Since the authors couldn't really say which system was best, I think the future may hold a better combination of the two systems.

Virtual Game Control

Paper: Hand Gesture Recognition and Virtual Game Control Based on 3D Accelerometer and EMG sensors

Written by: Zhang Xu, Chen Xiang, Wang Wen-hui, Yang Ji-hai, Vuokko Lantz, and Wang Kong-qiao

Comments: Aaron

Summary:
This paper compared EMG sensors and ACC sensors for detecting hand motions in order to interact with a computer. EMG (electromyography) sensors detect muscle activity and keep the hands free, while the ACC is an accelerometer that is easy to wear and helps figure out the hand motions the user is making. The signals from the ACC and EMG were processed in real time, and the motions were then translated into the interactive system the authors had created.


There were only 5 subjects for this study, and two different tests were used to calculate the accuracy of the systems. The first test simply measured the system's recognition of each hand gesture. The second test used a virtual Rubik's Cube puzzle the authors had created. Each test had three conditions: the first used just the EMG sensors, the second just the ACC sensors, and the third combined the two. Each participant practiced the gestures ten times for each test, and the average recognition accuracies for the 18 gestures are shown in the graph below.

The combination of EMG and ACC ended up with 91.7% accuracy. The EMG sensors, the ones on the participants' wrists, didn't do as well as the ACC sensors, which were located on the person's head.
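The paper's actual classifier isn't described in this summary, so purely as an illustration of why fusing the two sensor streams can beat either alone, here is a toy sketch: the EMG and ACC feature vectors are concatenated and a gesture is picked by a nearest-centroid rule. All feature values and gesture names are invented.

```python
# Toy illustration (not the paper's method): sensor fusion by feature
# concatenation, with a nearest-centroid gesture classifier.

def fuse(emg_features, acc_features):
    """Concatenate the two sensor streams into one feature vector."""
    return emg_features + acc_features

def classify(sample, centroids):
    """Return the gesture label whose centroid is closest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

centroids = {
    "wrist_flex":  fuse([0.9, 0.1], [0.0, 0.2]),
    "hand_rotate": fuse([0.2, 0.8], [0.7, 0.1]),
}
sample = fuse([0.85, 0.15], [0.05, 0.25])
print(classify(sample, centroids))  # prints wrist_flex
```

The intuition for the combined condition's higher accuracy is that gestures that look alike to one sensor (similar muscle activity) can still be separated by the other (different motion), so the fused feature vector is more discriminative.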

Discussion:
I thought this was pretty interesting because great virtual games could come from it. It would be a lot of fun to play around with this technology. Nintendo's Wii already has a movement recognition system in its controllers, and I think that is why the Wii has had so much success. For future work the authors mentioned wanting to make the system more robust, which may help the accuracy a bit more. With further research I think they will end up with something really good for the gaming industry.

Tuesday, March 2, 2010

Emotional Design


Book: Emotional Design by Donald A. Norman

Comments: Eric

Summary:



For this book I couldn't really pinpoint whether I liked it better than the first one. He hits many of the same topics, but instead of bashing designs like in the first book, he actually promotes the designs of products in this one. It is all about emotion and how someone feels when using a product. He mentions that there are a lot of items that may not have much function, but because they look cool and are pleasing to people, they will still get bought and used in some way. I think the key points he makes are the three different levels of the brain.
  1. Visceral: the automatic, prewired layer which is the most basic of the levels - How stimulated we are with the object.
  2. Behavioral: controls everyday behavior and learned behaviors - How we interact with an item.
  3. Reflective: the contemplative portion of the brain - Why we like something.
It felt as though the book went on and on about these different levels and how our emotions make us want to use a product or not. If something is aesthetically pleasing, people will most likely use it, but if it is "ugly" or unpleasing, people will avoid it even if it works perfectly. The basic theme of the book is that the design of products should make us happy so that we will want to buy and use them. Then in the last few chapters he puts a spin on how robots are our future and kind of goes crazy.

Discussion:
Again, I really wasn't sure what I thought about this book. I feel that we knew a lot of this already: yes, if I am pleased with the way something looks, I am more likely to use it. I can relate this to certain websites or communities. If I don't like the way a website looks, I am probably not going to use it. The same goes for products in my home: if I am not pleased with how something looks, I do not want it in my house. The future will probably also hold a lot of the robot things Norman talks about, and to be honest a few of them have already been built and are in use today. I think much of our world is going to become almost completely digital, and we won't have to work very hard. As long as we don't all turn into the people from Wall-E, I think we will be ok.

Thursday, February 25, 2010

Meeting Summarization

Paper: Improving Meeting Summarization by Focusing on User Needs: A Task-Oriented Evaluation

Written by: Pei-Yun Hsueh and Johanna D. Moore

Comments: Brandon

Summary:
This paper conducted a study on recorded meetings and how quickly people could extract the decisions made in them. The goal was to build a good meeting browser so that people could see summaries of a meeting and figure out which decisions had been made. There were 35 participants, 20 female and 15 male. They filled out a questionnaire about their prior experience with computers and meetings, and then each had to analyze four meetings and summarize the decisions made in them. They had to write a brief summary of the decisions within 45 minutes using the browser interface. The meeting browser had audio capability so that every participant could listen to the meeting, or to selected portions of it. The browser also had four different summary displays:
  1. Baseline (AE-ASR): automatic general-purpose extracts with automatic speech recognition
  2. AD-ASR: automatic decision-focused extracts with automatic speech recognition
  3. AD-REF: automatic decision-focused extracts with manual transcription
  4. Topline (MD-REF): manual decision-focused extracts with manual transcription
The graph above shows the results from the analysis of the summaries each participant created with the different browser options. On average, the participants given the decision-focused extractive summaries were able to hit each decision point much better than those given the general-purpose ones. The authors then ran an analysis of variance to examine the difference. Participants also found the decision-focused displays much easier to use and less demanding.

Having the display options was beneficial to all participants in completing their task. It was much easier for participants to go through the decision summaries to find the decision points than to wade through the extractive summaries, which were usually rather long. Only a few participants actually went through the audio-video recordings that were made available in order to find a few more of the decisions. The paper's conclusion was that they could not determine how efficient their browser was, since the baseline decisions were still of high quality. The topline decision-focused participants got through the task much more quickly, but since all participants arrived at roughly the same decisions, the authors could not pinpoint the browser's efficiency. Even when users encountered 30-40% inconsistency between the actual meeting and the extracted summary, they were still able to work out what the decisions were. So basically the browser worked well in either scenario, but the paper couldn't tell us how efficient it was.
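The paper's decision-focused extracts come from trained models, but the core idea of pulling decision utterances out of a transcript can be conveyed with a toy keyword filter. This sketch is purely illustrative; the cue phrases and transcript are made up and have nothing to do with the paper's actual extractor:

```python
# Toy sketch: filter a meeting transcript down to utterances that contain
# a decision cue phrase. A real system would use a trained classifier.

DECISION_CUES = ("we decided", "let's go with", "agreed", "final answer")

def extract_decisions(utterances):
    """Keep only utterances containing one of the decision cue phrases."""
    return [u for u in utterances if any(cue in u.lower() for cue in DECISION_CUES)]

transcript = [
    "I think the remote should be yellow.",
    "OK, we decided on the rubber case.",
    "Maybe we should break for lunch.",
    "Agreed, the final answer is a curved shape.",
]
print(extract_decisions(transcript))  # keeps the second and fourth lines
```

Even a crude filter like this shows why the decision-focused displays felt less demanding: the reader starts from a handful of candidate utterances instead of the whole meeting.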

Discussion:
This paper was semi-interesting but mostly just confusing. For some people it might be nice to look back at things they were uncertain of, or to let people who couldn't make the meeting see what it was about, but with the way businesses are run now, anyone can attend a meeting from wherever they are, so this may not be an issue. I think this browser could be improved and made more useful for businesses if there were a way to highlight the decision points without someone having to go through the meeting and pick out the decisions by hand. I wish I understood their methods a little better, though, and hopefully after I read the paper again for the presentation I can say more about it.