The Effect of Latency on Rescue Robot Control

Graeme Smith

Supervised by: James Gain

Department of Computer Science, University of Cape Town

November 3, 2008

Abstract

This thesis describes WiiRobot, a system built to control a rescue robot using components from the popular Nintendo Wii gaming console. The WiiRobot control system was used to conduct a series of experiments aimed at determining the effect of delay on the ability of users to control a rescue robot. Users were tasked with using a rescue robot to locate victims and navigate to waypoints in a simulated environment. It was found that delay has a statistically significant effect on the ability of users to avoid collisions with obstacles, but has no significant effect on their ability to locate victims. In addition, delay significantly affects users' navigation to specified waypoints, both in terms of time and distance travelled. It was also found that there is no significant difference in performance based on whether the delay is constant or variable.
Table of Contents
Abstract
Table of Contents
Table of Figures
1. Introduction
 1.1 Problem Outline
 1.2 Proposed Solution
 1.3 Aim of research
 1.4 Report Outline
2. Background
 2.1 Search and Rescue/Explorer Robots
 2.2 The effects of latency/delay
  2.2.1 Latency in Teleoperated Robotics
  2.2.2 Latency in Computer Racing Games
  2.2.3 Latency Mitigation Techniques
 2.3 Effective metrics for analysis
3. Design and Implementation
 3.1 System Structure
  3.1.1 Command Passing Structure
  3.1.2 WiiRobot System
  3.1.3 Wii System
  3.1.4 WiiuseJ
  3.1.5 USARSim
  3.1.6 Unreal Engine
 3.2 Driving the robot
  3.2.1 Commands
  3.2.2 Mathematics of motion
 3.3 HUD Design
  3.3.1 Laser Scanner
  3.3.2 Camera Pan/Tilt
  3.3.3 GPS
4. Experimentation
 4.1 Design
  4.1.1 Introduction
  4.1.2 Hypothesis
  4.1.3 Tasks
  4.1.4 Variables
  4.1.5 Materials
  4.1.6 Experimental Schedule
  4.1.7 Participants
  4.1.8 Ethical Considerations
  4.1.9 Procedure
 4.2 Simulation of delay
 4.3 Map design
 4.4 Results and Analysis
  4.4.1 The effect of latency on locating victims
  4.4.2 The effect of delay on collision avoidance
  4.4.3 The effect of delay on flipping avoidance
  4.4.4 The effect of delay on waypoint navigation
  4.4.5 The effect of delay on efficient navigation
5. Conclusion and Future Work
 5.1 Summary of work
 5.2 Impact of results
 5.3 Future work
6. References
7. Appendix
 7.1 Waiver
 7.2 Pre-experiment Questionnaire
 7.3 Post-experiment Questionnaire
 7.4 Screenshots of WiiRobot system
 7.5 Instructions given to participants

Table of Figures
Figure 1 - Examples of rescue robots
Figure 2 - Using a robot controlled by a form of head tracking
Figure 3 - Graph showing how performance in racing games
Figure 4 - Inter-system Command Passing Structure
Figure 5 - The WiiRobot System
Figure 6 - Wii Nunchuck and Wiimote
Figure 7 - Drumming robot which uses Wiimotes to control its arms
Figure 8 - Example GUI showing data received using WiiuseJ
Figure 9 - Nunchuck showing angles
Figure 10 - Skid-steered Driving Graph
Figure 11 - Camera Tilt and Laser Scanner displays from HUD
Figure 12 - GPS and compass display from HUD
Figure 13 - The Red Arena from RoboCup Rescue
Figure 14 - Maps
Figure 15 - Box and whisker plot showing outlier
Figure 16 - Graph showing average seconds per collision, with standard deviations
Figure 17 - Box and whisker plot showing outliers
Figure 18 - Graph showing average seconds per waypoint, with standard deviations
Figure 19 - Box and whisker plot showing outliers
Figure 20 - Graph showing average distance ratios, with standard deviations
Figure 21 - Pool balls
1. Introduction
1.1 Problem Outline
Rescue robots are fast becoming an essential part of any search and rescue mission, as they allow rescuers to explore locations that are either too dangerous or too difficult for humans to get to. They can be driven into buildings filled with toxic gases, or lowered down a small opening in the rubble in order to seek out victims. The ability to deploy these robots efficiently is essential, as every second counts in these life-or-death situations. Many control systems designed to date are cumbersome and take time to set up, which leads to longer deployment times. We seek to develop a system, using off-the-shelf equipment, that drastically reduces deployment time without compromising control of the robot.
Figure 1 - Examples of rescue robots. Left: Talon, used at the World Trade Center in 2001 [4]. Centre: The controller for Talon [4]. Right: P2AT, the robot used in this project.
1.2 Proposed Solution
We propose using Nintendo Wii gaming components to control the robot. The standard Wii system contains, amongst other items, a Wiimote and a Nunchuck, which we have used to develop a control system for the robot. The Wiimote has a built-in accelerometer, which was used to implement head tracking. The head tracking was used to control the camera mounted on the robot. The Nunchuck also has a joystick, which was used to drive the robot, in much the same fashion as a joystick would be used to drive a remote-control car or a car in a computer racing game.
We made use of USARSim (Urban Search and Rescue Simulator) to test the control system. USARSim was also used to test the effect of delay on the ability of the user to efficiently and accurately control the robot, as this is an important issue in the use of rescue robots.
1.3 Aim of research
The aim of the research in this project was to determine the effect of delay on users controlling a simulated rescue robot. We hypothesised that an increase in delay would have a negative effect on the ability of users to locate victims, to avoid collisions and to navigate quickly and efficiently to waypoints in a map. Furthermore, we hypothesised that introducing variable delay, as opposed to constant delay, would have an even greater impact on users.
1.4 Report Outline
A background to the various problems addressed by this project is given in Chapter 2. The design of the system is then discussed in Chapter 3. Chapter 4 covers the experiment design and results, as well as the analysis of those results. This thesis closes with conclusions and future work in Chapter 5.
2. Background
2.1 Search and Rescue/Explorer Robots
Robots are increasingly utilised in search and rescue missions as they enable rescuers to search areas that are either too dangerous for humans or too difficult to get to, such as a gas pipe, a small hole, or an unstable building. The first major deployment of rescue robots was in the aftermath of the 9/11 attacks at Ground Zero [4]. The robots were equipped with video cameras and microphones and were used to, firstly, seek out survivors buried under the rubble and, later, to locate the bodies of victims for retrieval. The bodies of 10 victims were located through the use of teleoperated robots. Other areas of use for search and rescue robots include bomb, earthquake and fire sites. Robots can also be used for exploration, either in a military capacity or in other situations, such as space exploration. In a military capacity, the robot could be used to drive into enemy territory to act as a dispensable spy. The United States Defense Advanced Research Projects Agency's (DARPA) Tactical Mobile Robot Program is in the process of building semi-autonomous robots for use in difficult or risk-intensive tasks [24]. In space, the robot could be deployed, on Mars for example, and used to relay footage and collect samples for later examination. Of particular interest to researchers is the manner in which robots of this sort are controlled. Robots will, in general, be teleoperated from a command centre. This could be a building, a tent or a vehicle. The robots used in the 9/11 aftermath were controlled by joysticks [4], whereas other systems utilise body and/or head tracking, where the movement of the user's head is tracked and used to control either the robot or the camera mounted on the robot [14].
Figure 2 - Using a robot controlled by a form of head tracking [5]
2.2 The effects of latency/delay
Latency, used interchangeably in this project with "delay", is defined as the time delay between the issuing of a command and receiving a visual response. The cause is usually the transmission of information across a network [5]. A user will issue a command to the robot ("move forward", for example) and will only see, on the display, that the robot has moved forward after a certain period of time. There can be delay in the transmission from the user to the robot, from the robot to the user, or both.
2.2.1 Latency in Teleoperated Robotics
In their paper on human performance issues for teleoperated robots, Chen et al. discussed how sensitive users are to latency [5]. Users can detect latency as low as 20ms and, with delay times rising above 1s, they tend to adopt a "move and wait" strategy in controlling the robot. That is, due to the high delay, users are not confident enough to move continuously for long periods of time. Thus they move for a shorter period of time, and then wait for their display to "catch up" with the robot's. Ferrell and Sheridan found that latency has a profound impact on users' ability to control a robot efficiently [11]. In fact, the effects found were such that the resulting time increases were well in excess of the increase in the amount of delay. It should be noted that Ferrell and Sheridan concerned themselves with latencies in excess of a second, while this project considers a range of delay times below 150ms. These figures are more in line with those
used by Kay and Thorpe [21], in their study of low-bandwidth, high-latency teleoperation; and Muzzetta and Cristaldi [27], in their report on robotic assets controlled via wireless LAN, the control medium currently slated to be used for the physical robot this project is designed for. Lane suggested that a short variable delay is more detrimental than a longer constant delay [23] (see below for further discussion of variable delay). It was speculated by Draper et al. [9], and subsequently shown by Kaber et al. [20], reporting on users controlling a tele-robot arm to pick and move items to specified locations, that delay has a detrimental effect on telepresence, which, in turn, has a detrimental effect on user performance.
2.2.2 Latency in Computer Racing Games
A large portion of research into the effect of delay on driving ability has been in the area of computer gaming. This is still applicable to the purposes of this project, as the act of driving the robot can be abstracted such that the user can be regarded as driving the robot in a racing game. Research into this area has shown that small amounts of delay have little effect on the overall performance of the player. Figures of 60ms [31], 50ms [29] and 100ms [3] were quoted as the threshold values above which performance is affected. Quax et al., investigating the effect of delay and jitter on players of the first-person shooter Unreal Tournament 2003, found that jittered delay had a negative effect on participants in a real-time gaming environment [31]. It is also generally accepted that as the complexity of the task increases, so does the effect of latency [3, 26]. Thus, moving in a straight line between two points will not be greatly affected by delay, but moving through a tight maze will be affected significantly. Users tend to perform better when they are immersed in a game/environment. Thus, a system that maintains a sense of immersion would provide better results than one which does not. Chen et al. hypothesised that, in online games, the sense of immersion is
inversely proportional to the rate at which players leave games [6]. They analysed data from online game servers and found that departure rates increased as latency increased and as network variation, or jitter, increased. This shows that a player’s sense of immersion is affected by network delay issues. Immersion, however, cannot always be measured empirically, and subjective impressions have to be taken into account. Pantel et al., investigating the effect of delay on multiplayer racing games, found that players considered delays of over 100ms to produce unrealistic game play, which leads to a lack of immersion [29].
Figure 3 - Graph showing how performance in racing games is extremely dependent on latency [7]
2.2.3 Latency Mitigation Techniques
Depending on where in the system the delay originates, certain methods can be employed to mitigate its effects. The problem with most methods is that they are tailored for virtual environments and, as such, would not work for our live, video-feed-based system. Pantel et al. suggest methods of controller prediction [30]. Three methods are implemented: constant control position, constant control velocity and constant control acceleration. In each of these, the respective attribute is assumed to have remained constant, and the corresponding command is carried out until a new command arrives.
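To make the idea concrete, here is a minimal sketch of the constant-control-velocity variant; the class and member names are illustrative and are not taken from Pantel et al.'s implementation:

    // Sketch of constant-control-velocity prediction: while no fresh command
    // has arrived, the last observed rate of change of the control is assumed
    // to hold. All names are illustrative, not from Pantel et al.
    class ControlPredictor {
        private double lastPosition;   // last control position received
        private double lastVelocity;   // rate of change between the last two commands
        private long lastUpdateMillis; // arrival time of the last command

        // Called whenever a real command arrives over the network.
        void onCommand(double position, long nowMillis) {
            double dt = (nowMillis - lastUpdateMillis) / 1000.0;
            if (dt > 0) {
                lastVelocity = (position - lastPosition) / dt;
            }
            lastPosition = position;
            lastUpdateMillis = nowMillis;
        }

        // Called every frame; extrapolates the control while waiting for data.
        double predict(long nowMillis) {
            double dt = (nowMillis - lastUpdateMillis) / 1000.0;
            return lastPosition + lastVelocity * dt; // constant-velocity assumption
        }
    }

The constant-position and constant-acceleration variants follow the same pattern, holding a different derivative of the control fixed.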
Another way to mitigate the effects of lag is to increase the level of automation of the robot. Luck et al. found that the higher the level of automation, the lower the effect of lag [26]. For example, a robot could be instructed to move to a waypoint but must find its own way there. They showed that this approach yielded better results under delay than manually driving the robot to the required waypoint.
2.3 Effective metrics for analysis
The system being built has to be analysed by some measure. It can either be evaluated based on empirical experimental results or by subjective ratings and, indeed, both approaches are employed in most latency research. Dick et al. used a Mean Opinion Score (MOS) to evaluate users' online game play experience [8]. Players were asked to rate the experience on a Likert scale, from one to five, with a score of one representing an unacceptable, impossible game play experience and five representing a perfect environment. Steinfeld et al. have put forward some very useful metrics for empirically analysing human-robot interaction [35]. Robot interaction was divided into five broad categories:
- Navigation: How the robot moves from point A to point B.
- Perception: How the robot or user understands its environment.
- Management: How tasks are coordinated in teams of robots and humans.
- Manipulation: How the robot interacts with its environment.
- Sociability: How the robot performs tasks requiring significant social interaction.
In particular, the metrics for measuring navigational tasks are pertinent to the analysis of control of rescue robots. Steinfeld et al. give several potential metrics for this category, of which the following are applicable to this project: deviation from planned route, time to complete task, obstacle avoidance and percentage of navigational tasks successfully completed. These allow one to quickly compare the effects that various factors (such as latency) or different interaction systems have on users.
3. Design and Implementation
3.1 System Structure
We have made use of various open source components in the implementation of this project. What follows is a brief summary of each of the components, and how they fit into the system as a whole.
3.1.1 Command Passing Structure
The way in which commands are passed between the systems is illustrated in Figure 4. All data from the Wiimote is received and handled by WiiuseJ. Events generated by WiiuseJ are received by our system, WiiRobot. This, in turn, sends instructions to USARSim, which communicates with the Unreal Engine. Note that all data flow is two-way. For example, WiiRobot can both receive information from WiiuseJ and send instructions, such as to tell the Wiimote to activate its rumble mechanism.
Figure 4 - Inter-system Command Passing Structure
3.1.2 WiiRobot System
WiiRobot, the system designed and built in this project, is designed to work as an intermediary between the Wii components and USARSim, the simulator used in place of a physical robot. The decision to utilise a simulator was necessitated by the limited access to the physical robot during the course of this project. In addition, constant use and manipulation of the physical robot, which is still in its testing stages, would cause costly damage, both in terms of hardware and time. A decision not to use a simulator would also have necessitated the building of physical mazes for testing, which, while interesting, is not in the scope of this project. Thus a simulator is used, as it provides a quick, easy-to-test and semi-realistic environment for testing. WiiRobot achieves its role as an intermediary by receiving commands, as events, from the WiiuseJ package, analysing these commands, and thereafter sending new commands to the USARSim system. For example, if the joystick is moved on the Nunchuck, an event will be generated by WiiuseJ. WiiRobot will handle this event by converting the angle and magnitude of the joystick movement into speeds for each robot tread (see Section 3.2.2 for exact details of the skid-steering equations). The design of WiiRobot is primarily geared towards the experiments that had to be carried out. It would, however, be a trivial task to adapt the system for most purposes in the realm of robot control and experimentation.
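As a rough sketch of this intermediary role (the class and method names are illustrative, and the wire format is a simplification of USARSim's text-based command protocol rather than its authoritative syntax):

    import java.io.PrintWriter;
    import java.net.Socket;

    // Sketch of WiiRobot's intermediary role: tread speeds derived from a
    // Nunchuck joystick event are forwarded to USARSim as a plain-text drive
    // command over a TCP socket. Names and message syntax are simplified
    // approximations, not the exact WiiuseJ/USARSim APIs.
    class RobotDriver {
        private final PrintWriter usarSim;

        RobotDriver(String host, int port) throws Exception {
            // USARSim accepts text commands over a socket connection.
            usarSim = new PrintWriter(new Socket(host, port).getOutputStream(), true);
        }

        // Invoked with tread speeds already computed from the joystick angle
        // using the skid-steering equations of Section 3.2.2.
        void drive(double leftSpeed, double rightSpeed) {
            usarSim.printf("DRIVE {Left %.2f} {Right %.2f}%n", leftSpeed, rightSpeed);
        }
    }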
Figure 5 shows a screenshot of the system in action, whilst further screenshots of the various menus and experiment panels can be found in Appendix 7.4.
Figure 5 - The WiiRobot System
3.1.3 Wii System
Since the release of the Nintendo Wii gaming console in 2006, numerous projects have developed alternative uses for the innovative controllers introduced by the Wii gaming system.
3.1.3.1 Overview of the Wii system
The Wii remote, or Wiimote, uses a three-axis accelerometer and an infrared-sensing camera to position itself in space. This allows it to be used as a gesture-based controller. When using the gaming console as intended, a sensor bar containing infrared lights is placed near the television screen. The Wiimote will then track these lights to calculate its position in space. It uses a Bluetooth wireless link to communicate, which allows it to be used with other Bluetooth devices [25, 32]. One of the major issues with using an accelerometer is that it cannot detect lateral motion (motion perpendicular to the gravitational pull). This means it can only detect its position in the "up-down" plane of motion, but not in the "left-right" plane. In order to detect this motion, the sensor bar has to be used [16]. Nintendo is releasing a gyroscope add-on, called Motion-Plus, which will plug into the back of the Wiimote and allow full motion detection and tracking [13]. This, however, was not available at the time of writing. The Wiimote can also provide limited haptic feedback through the built-in rumble mechanism, which uses an unbalanced weight to produce vibrations [32]. The Wiimote has an expansion slot into which other devices, most commonly the Wii Nunchuck, can be plugged. The Nunchuck is an extension which is usually sold with the Wii system. It is held in the user's other hand and incorporates two buttons and a joystick.
Figure 6 - Wii Nunchuck and Wiimote
3.1.3.2 Previous work with the Wii system
Sreedharan [34] utilised the Wiimote to allow users to more realistically interact with the online virtual environment, Second Life. The Wiimote could be moved in certain "gesture-motions" in order to direct the user's in-game character to perform an action. For example, moving the Wiimote in an up and down motion produces a nodding head action, whilst moving the Wiimote from side to side produces a head shaking action. Shirai et al. [33] developed a whole suite of applications utilising the Wii system. These applications include a racing game, a painting utility and a fencing game. They noted that the system did not provide sufficient force-feedback for completely realistic control in all situations. For example, there is nothing stopping a user playing the racing game from turning the Wiimote very quickly through ninety degrees. Whilst this may be possible for the user, it is not possible for the racing vehicle to turn as quickly, or even as much. Thus presence breaks down, as there is a discrepancy between the user's actions and the result. In the field of teleoperation of physical entities, Guo et al. [16] used Wiimotes to control the navigation and posture of an AIBO robot dog. Navigational commands such as "walk forward", "strafe right" and "rotate left" were all issued by combining the
positions of two Wiimotes, whilst postural positions such as "left paw up" were linked to actions such as raising a Wiimote up above the head. Similarly, Filippi [12] used Wiimotes to control robotic arms for use on a space station. Both of these projects encountered limitations arising from the accelerometer. Guo et al. chose to constrain their motion analysis to include only the pitch of the Wiimote, whilst Filippi chose to use the D-pad on the Wiimote to allow control of left-right motion. Another project to use the Wiimote to allow arm control of a robot was that of Gams et al. [13], where the user's arm motion was captured using Wiimotes and conveyed to a robot. This is illustrated by their "drumming robot", shown below.
Figure 7 - Drumming robot which uses Wiimotes to control its arms
One of the first developers to conceive and publicise the possibilities presented by the Wii system for head-tracking purposes was Johnny Chung Lee [25]. His discoveries were made popular by the videos he released on online video sites, such as youtube.com, demonstrating the applications he had developed. These include head-tracking as well as finger and other object tracking, and an interactive whiteboard.
3.1.4 WiiuseJ
WiiuseJ was developed by Guilhem [15] and is an adaptation of the original Wiiuse API developed by Laforest [22]. It provides an API for Wiimotes to communicate with the computer. It allows developers to access information from the Wiimote, such as accelerometer data and button events, as well as control functions, such as rumbling the Wiimote. WiiuseJ was chosen due to its being written in Java, the language the development team was most comfortable in. An alternative would be the original Wiiuse API, which is written in C.
Figure 8 - Example GUI showing data received using WiiuseJ
3.1.5 USARSim
The Urban Search and Rescue Simulator, USARSim, is a high-fidelity simulation built on top of the Unreal Engine [1]. It was designed to be a simulation environment for Urban Search and Rescue (USAR). It provides several robot models, as well as the framework for further development of other models. Maps depicting those used at RoboCup Rescue are distributed with the system, and new maps can be designed using Unreal's Unreal Editor. USARSim allows conceptual robots to be designed and tested at low cost, without the need for physical implementations. It was chosen as it is highly regarded by the rescue robot community, and is used as the framework for the RoboCup Virtual Robot Competition [1].
3.1.6 Unreal Engine
The Unreal Engine was developed by Epic Games [10, 37]. Its most popular use has been in the development of the first-person shooter game, Unreal Tournament. It provides 3D rendering, the Karma physics engine and the Unreal Editor, a 3D authoring tool which assists developers in building maps and terrain.
3.2 Driving the robot
3.2.1 Commands
The robot is driven using the Nintendo Wii Nunchuck extension's joystick. This provides high affordance, as many users are familiar with driving either a remote-control car or a simulated, racing-game car using a joystick. Pressing forward on the joystick moves the robot forward; pressing backwards moves it backwards. Pressing the joystick to the left will rotate the robot on the spot to the left. Any combination of forward/backward and left/right motion is combined to allow sensible control of the robot.
3.2.2 Mathematics of motion
The robot is of a type called "skid-steered". This means that it has two treads, left and right, which turn independently of each other. The robot is driven by providing a speed to each of its treads. To move the robot forward, for example, the maximum speed is sent to both the left and right treads. To move it backwards, the negative maximum speed is sent to both. To rotate the robot to the left, a positive speed is sent to the right tread and the additive inverse of that speed is sent to the left. The direction given by the joystick is provided as an angle, with forward being 0º. This angle must be converted into a speed for the left and right treads to provide sensible motion.
Figure 9 - Nunchuck showing angles
The following equations were derived for driving the robot based on the angle received from the Nunchuck. In runnable Java form (theta is the Nunchuck joystick angle in degrees, 0º being forward):

    // theta: angle of the Nunchuck joystick in degrees (0 = forward).
    // Returns the speeds for the left and right treads.
    static double[] treadSpeeds(double theta, double topSpeed) {
        double half = Math.toRadians(theta / 2.0);
        double left, right;
        if (theta < 180) {
            left  = topSpeed * (2 * Math.cos(half) - 1);
            right = topSpeed * (1 - 2 * Math.sin(half));
        } else {
            left  = topSpeed * (1 - 2 * Math.sin(half));
            right = topSpeed * (-2 * Math.cos(half) - 1);
        }
        return new double[] { left, right };
    }

The responses generated by these equations were in line with the expected response for any given joystick movement. The resulting graph is shown below.
Figure 10 - Graph showing how the angle of the Nunchuck joystick relates to the speed of each tread.
As can be seen, these equations give a logical resulting tread speed for each angle. At 0º both treads are moving forwards at top speed. At 90º they are moving in opposite directions, causing the robot to turn on the spot. At 180º they are both moving backwards at top speed. It should be noted that linear interpolation would not have been a suitable solution to this problem, as a simple linear interpolation scheme would not have allowed the robot to rotate on the spot: both treads would be stationary at 90º and 270º (this can be seen by looking at the green line on the above graph, which is equivalent to the linearly interpolated solution). Thus, trigonometric equations were found to be preferable in mimicking a skid-steered robot.
3.3 HUD Design
USARSim allows developers to query data from components placed on the robot models. These components include sonar, laser scanners, odometers, infrared sensors and CO2 sensors. These are all simulated objects which could, in real life, be placed onto a physical rescue robot to assist in its tasks. For this project we utilised the laser scanner, odometer and camera components. The heads-up display (HUD) was placed at the bottom of the user's screen to allow users to quickly access information about the state of the robot and its environment.
3.3.1 Laser Scanner
An initial attempt was made to use the sonar object provided by USARSim, but it was found to be extremely hard to read, as it emits only eight beams and thus conveys very little information. The laser scanner was found to be a far better choice for two reasons. Firstly, it emits one hundred and eighty beams in an arc at the front of the robot, leading to far more readable scans. Secondly, the physical robot this system is being designed for has the exact laser scanner provided by USARSim, which will make any later transition from simulator to physical robot much simpler and more faithful.
Querying the laser scanner returns an array of numbers, each representing the distance travelled by a beam before colliding with an object. This allows us to depict the area in front of the robot by reproducing these beams. The longer the lines representing the beams, the further away objects are (see Figure 11).
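For illustration, a minimal sketch of how such a scan could be converted into drawable line endpoints; the class and parameter names are hypothetical rather than taken from WiiRobot:

    // Sketch: convert a 180-beam laser scan into line endpoints for the HUD.
    // The beams sweep a half-circle in front of the robot; each range value is
    // the distance a beam travelled before hitting an object.
    class LaserDisplay {
        // Returns {x, y} endpoints (robot at the origin, facing +y) per beam.
        static double[][] beamEndpoints(double[] ranges, double pixelsPerMetre) {
            double[][] points = new double[ranges.length][2];
            for (int i = 0; i < ranges.length; i++) {
                // Spread the beams evenly over the 180-degree arc.
                double angle = Math.PI * i / (ranges.length - 1);
                points[i][0] = Math.cos(angle) * ranges[i] * pixelsPerMetre;
                points[i][1] = Math.sin(angle) * ranges[i] * pixelsPerMetre;
            }
            return points;
        }
    }

Drawing a line from the origin to each endpoint reproduces the red beam display of Figure 11.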
3.3.2 Camera Pan/Tilt
The camera’s pan and tilt are queried from USARSim and depicted by drawing the tilt as an elevation arc and overlaying the horizontal direction of the camera over the laser scanner. The “Camera Tilt” display shows a line representing the vertical displacement of the camera from the rest position. The blue arc in the “Laser Scanner” display represents the current horizontal field of view of the camera, whilst the red lines represent the beams emitted from the laser scanner.
Figure 11 - Camera Tilt and Laser Scanner displays from HUD
3.3.3 GPS
Using the odometer it was possible to simulate a GPS system, which a rescue robot could be fitted with. This would allow users to efficiently navigate the robot to designated areas by using the coordinates. The GPS system displays the current coordinates of the robot, as well as the coordinates of the next waypoint (the intended destination). A compass is also displayed, showing the current heading of the robot.
Whilst the GPS system in this project is very basic and unrealistic (the North-South range on a normal map is from 10ºN to 10ºS), it does serve as a proof of concept, as it could prove highly beneficial to real-life rescue missions. The GPS was used in the second session of experiments to test the effect of delay on navigational tasks.
Figure 12 - GPS and compass display from HUD
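A sketch of the bearing calculation such a display needs; treating the coordinates as a flat plane is an assumption that matches the simulator's small, artificial GPS range, and the names are illustrative:

    // Sketch: bearing from the robot to the next waypoint, for the GPS/compass
    // display. Coordinates are treated as a flat plane with +y pointing north
    // and +x pointing east; this convention is an assumption.
    class GpsDisplay {
        // Bearing in degrees, measured clockwise from north (0 = due north).
        static double bearingTo(double x, double y, double waypointX, double waypointY) {
            double degrees = Math.toDegrees(Math.atan2(waypointX - x, waypointY - y));
            return (degrees + 360) % 360; // normalise into [0, 360)
        }
    }

Comparing this bearing with the robot's current compass heading tells the user which way to turn.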
4. Experimentation
4.1 Design
4.1.1 Introduction
The experiments were carried out in two sessions, making use of fourteen and sixteen participants, respectively. The aim of the experiments was to test the effect of latency on the ability of the user to control different aspects of the rescue robot. The aspects chosen are specific to typical tasks rescue robots will need to carry out. Each session tested different effects of latency. The first session was designed to determine the effect of latency on the ability of the user to:
- Locate victims
- Avoid collisions
- Avoid flipping over
The second session aimed to determine the effect of latency on the ability of the user to visit waypoints located around a map.
4.1.2 Hypothesis
For the first session of experiments the hypothesis was that a user is better able to locate victims and avoid collisions when there is no delay, and less able when there is variable latency. Furthermore, a user's performance, under constant latency, is inversely proportional to the amount of latency present in the system. The hypothesis of the second session of experiments was that a user is better able to visit waypoints when there is no delay, and less able when there is variable latency. Furthermore, the user's performance, under constant latency, would be inversely proportional to the amount of latency present in the system.
4.1.3 Tasks
In the first session the user was given control of the robot in the artificial environment. They were instructed to find eight pool balls hidden around the environment as quickly as possible. The pool balls represent victims. They were used instead of people as they pose less likelihood of upsetting participants. The round terminated either when they had found all eight pool balls or five minutes had elapsed. To signal that a ball had been spotted, the user would call out either the colour of the ball or the number on the ball. The experiment supervisor then checked on his monitor that this was correct. If it was indeed a correct find, it would be logged and the user could continue. In addition, the user was instructed to avoid collisions as the real-life robot could be damaged in such cases. Six different maps were designed using Unreal Editor. These were based on the Red Arena from RoboCup Rescue. This arena is made up of a single level with rubble piles and collapsed walls leaning on each other [19]. It was chosen based on its small size relative to the other maps, and relatively few obstacles. Each map was the same size, and contained the same number of obstacles. The eight balls were hidden in different locations on each map. The map choices were randomised in order to ensure greater validity and to guard against learning effects.
Figure 13 - The Red Arena from RoboCup Rescue [19]
In the second session the user was given control of the robot in the artificial environment. They were instructed to visit each waypoint, in order, as displayed in the GPS at the bottom of the screen. When they had visited all five waypoints, or four minutes had elapsed, the round was terminated. In order to visit a waypoint, the user would navigate the robot to that point on the map (indicated by both a GPS coordinate on their HUD and a raised mound in the map). When they were within a certain radius of the waypoint, a chime would sound, signalling that the waypoint had been found, and a new waypoint would be displayed in the HUD. The event would be logged automatically by the system. Four maps were designed using Unreal Editor. These were four times larger than the maps used in the first session, as this session aimed to test navigational skills rather than collision avoidance or manoeuvring. Each map had the same number of obstacles. The waypoints were placed at different points in each map in such a manner that the total distance travelled (the distance from the start point to each waypoint in turn) was equal for all maps.
4.1.4 Variables
There are three variables of interest in this study: speed, control and navigation. Speed is defined as how quickly the user can move the robot through the environment. In the first session it is calculated as the mean time taken to find each pool ball:

\[ \text{speed}_1 = \frac{t_{\text{round}}}{n_{\text{balls found}}} \]
In the second session it is calculated as the mean time taken to find each waypoint:

\[ \text{speed}_2 = \frac{t_{\text{round}}}{n_{\text{waypoints visited}}} \]
Control and navigation are defined as the degree to which the user can control the robot, move it where he/she wants, and only where he/she wants.
In the first session control is calculated as the number of collisions per minute, as well as the number of times per minute that the robot is "inverted", or flipped upside down:

\[ \text{collision rate} = \frac{n_{\text{collisions}}}{t_{\text{round}}} \qquad \text{flip rate} = \frac{n_{\text{flips}}}{t_{\text{round}}} \]

where \( t_{\text{round}} \) is measured in minutes.
In the second session navigational ability is calculated as the ratio of the distance travelled by the user to the optimum distance to visit the waypoints in the map:

\[ \text{navigation} = \frac{d_{\text{travelled}}}{d_{\text{optimum}}} \]
4.1.5 Materials
The experiment was conducted in a closed room. A laptop was connected to the head-mounted display, and the laptop's display was cloned to allow the experiment supervisor to watch on a different monitor. A second laptop was used to record data such as the locating of balls and collisions. One participant took part at a time. The laptop running the simulator had the following specifications:
- Processor: T2400 1.83GHz
- Memory: 3GB RAM
- HDD: 250GB disk
- Graphics: X1600 Mobility Radeon 256MB
The head-mounted display used has two independent LCD screens, each capable of displaying at a resolution of 800x600.
All commands issued by the user were logged by the simulator, as well as the position and rotation of the robot at each time tick (every 200ms). In addition, in the first session, the time of each ball location was logged, as well as the time of each collision. In the second session the time of each waypoint visit was logged.
4.1.6 Experimental Schedule
Each participant took part in four rounds. The rounds were as follows:
1. No delay
2. Variable delay - a random delay, normally distributed around a mean of 80ms with a standard deviation of 40ms
3. Constant delay 1 - a constant delay of 70ms
4. Constant delay 2 - a constant delay of 140ms
It could be argued that the learning effect would have an impact on the results of the experiments, since every participant attempted every round. It is true that a better option would have been to have every participant participate in only one round. This, however, was not feasible: to have a single person participate in a single round would have required one hundred and twenty volunteers, each of whom needed to be trained, briefed and compensated. As such, the decision was made to have each participant attempt every round and, in order to offset the learning effect, the ordering of the rounds, as well as the maps on which they were run, was randomised.
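A minimal sketch of this per-participant randomisation (a plain uniform shuffle is assumed here; the study does not state the exact counterbalancing scheme used):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Sketch: shuffle the order of the four rounds (and, analogously, the
    // maps) independently for each participant to offset learning effects.
    class ScheduleRandomiser {
        static List<String> roundOrderFor(long participantSeed) {
            List<String> rounds = new ArrayList<>(List.of(
                    "No delay", "Variable delay", "Constant 70ms", "Constant 140ms"));
            Collections.shuffle(rounds, new java.util.Random(participantSeed));
            return rounds;
        }
    }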
4.1.7 Participants
Participants were mainly sourced from the Computer Science department by advertising on the respective websites as well as in the laboratories. Fourteen students participated in the first session and sixteen students participated in the second session. There were twenty-five males and five females.
4.1.8 Ethical Considerations
Users wearing a head-mounted display for the first time can sometimes experience nausea or dizziness. In addition, the introduction of discontinuities between action and observed reaction (the effect of lag) can cause "car-sickness", nausea and light-headedness. Approval from the UCT Ethics Committee was sought to allow the experiments to proceed safely. To combat these effects it was made clear to users in the pre-experiment briefing that they were allowed to stop the experiment at any time, especially if they were not feeling well. In addition, users were made aware of these potential ill-effects by way of a consent form which they were asked to sign before the experiment. One participant out of the thirty experienced nausea, and was allowed to stop immediately and left as soon as they felt better.
4.1.9 Procedure
4.1.9.1 Instruction/Sandbox Stage
The user was given a sheet of instructions detailing how to control the robot. They were then placed in a "sandbox" environment for five minutes in order to familiarise themselves with the controls.
4.1.9.2 Rounds
The users were then given a second set of instructions, detailing the objectives of the rounds. The first round then started. In the first session the round ended once all the balls had been found, or five minutes had elapsed. In the second session the round ended once all waypoints had been visited, or four minutes had elapsed.
Each user was given a questionnaire to gauge their perceived presence in the environment as well as the difficulty of controlling the robot. This questionnaire can be found in Appendix 7.3. The second round then started. This continued until all rounds were completed. On completion of the experiment each volunteer was paid R20 for participating.
4.2 Simulation of delay
Two types of delay were introduced into the system: constant and variable. They were introduced between the user issuing a command and the command being sent to the simulator. Delay was implemented using a busy-wait loop, where the system would loop for a set period of time (doing nothing constructive in that time) before carrying on. The constant delay periods tested were seventy milliseconds and one hundred and forty milliseconds. The variable delay was simulated using a normal distribution, with a mean of eighty milliseconds and a standard deviation of forty milliseconds.
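A sketch of both delay mechanisms as described; clamping the sampled value at zero is our assumption, since a normal distribution with these parameters can produce negative values:

    import java.util.Random;

    // Sketch of the delay injection described above: each outgoing command is
    // held in a busy-wait loop before being forwarded to the simulator.
    class DelaySimulator {
        private static final Random RNG = new Random();

        // Constant delay: spin for exactly 'millis' milliseconds.
        static void constantDelay(long millis) {
            long end = System.currentTimeMillis() + millis;
            while (System.currentTimeMillis() < end) {
                // busy-wait: deliberately do nothing constructive
            }
        }

        // Variable delay: normally distributed, mean 80ms, SD 40ms,
        // clamped at zero (an assumption not stated in the text).
        static void variableDelay() {
            long millis = Math.round(80 + 40 * RNG.nextGaussian());
            constantDelay(Math.max(0, millis));
        }
    }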
4.3 Map design
Maps were designed using Unreal Editor. Three types of maps were needed for the experiments. The first session of experiments dealt with locating victims and avoiding collisions. To avoid distressing users, bodies were not used as victims; instead a more harmless object, a pool ball, was used. Eight pool balls were hidden around a map based loosely on the Red Arena from RoboCup Rescue. Numerous obstacles were also placed around the maps for users to avoid. The aim was to find the pool balls as quickly as possible whilst avoiding collisions. To ensure validity, each map was the same size and had the same number of each type of obstacle located in it. The balls were hidden in similar, but different, locations in each map. A pilot experiment revealed that simply hiding balls under tables was not sufficient, as users quickly learned to look in these obvious places. Therefore, more hiding places were added, which had the double benefit of not only
increasing the number of possible hiding places, but also increasing the number of obstacles the users had to avoid.
Figure 14 - Top left: Map from Round 1. Top Centre: Map from Round 2. Top Right: Sandbox Map
The second session of experiments dealt with navigation. Users were placed in a far larger map than in the first session, and instructed to visit the waypoints displayed on the GPS unit on their HUD. There were five waypoints on each map. To ensure validity, the total distance from the starting point to each waypoint, in order, taking obstacles into account, was the same for each map. The third type of map was a sandbox, which was used to allow users to familiarise themselves with the controls and with the goals of the experiments. This was done by placing the robot on a track with the eight pool balls, two waypoints and various obstacles found in the maps. The user could drive around the track, spotting pool balls and visiting waypoints.
4.4 Results and Analysis
Analysis of variance (ANOVA) was used to test the hypotheses. This was chosen because the independent variable (experiment type) is nominal and the dependent variables are continuous (or interval). In addition, it is not known whether the responses (dependent variables) are normally distributed. The ANOVA technique is known to be fairly robust to small deviations from normality and, according to Howell [17], may still be used with non-normally distributed input as long as the input is not significantly peaked or skewed. Data's "peakedness" is measured using a metric called the kurtosis value. Therefore, each response was first tested for normality using a Shapiro-Wilk test [2]. If it failed this test, its kurtosis and skew were then tested. If the skewness and/or kurtosis were too high, a logarithmic transformation was performed on the variable and the Shapiro-Wilk test was then conducted on the new transformed variable [36]. This is done to transform the nonlinear relationship into a linear one. If it still failed this test, then the skewness and kurtosis of the new transformed variable were checked to ensure that they were now lower. All data that failed on the first tests passed when logarithmically transformed. Osborne et al. [28] note that outlier removal is a controversial topic and that there is no real consensus in the literature as to how to identify and deal with outliers. They do, however, give guidelines for these procedures, which have been adopted in this study. Thus, outliers will be defined as those data points lying more than three standard deviations from the mean, and will be removed before analysis. All statistical queries were carried out using R, a free software environment for statistical computing [18]. The procedure for each response is as follows:
1. Outliers are removed.
2. The data set is checked for normality, peakedness and skew.
3. If it fails, it is transformed using a logarithmic transformation.
4. ANOVA is then carried out on the data.
5. The results of the ANOVA will tell us whether the experiment type has a significant effect on the response.
6. Conclusions can then be drawn from these results.
The results for each response will be discussed separately.
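A minimal sketch of step 1, the three-standard-deviation rule adopted from Osborne et al.; the helper name is illustrative (the actual analysis was carried out in R):

    import java.util.Arrays;

    // Sketch of step 1: drop data points lying more than three standard
    // deviations from the mean, per the guideline adopted in this study.
    class OutlierFilter {
        static double[] removeOutliers(double[] data) {
            double mean = Arrays.stream(data).average().orElse(0);
            double variance = Arrays.stream(data)
                    .map(x -> (x - mean) * (x - mean)).average().orElse(0);
            double sd = Math.sqrt(variance);
            return Arrays.stream(data)
                    .filter(x -> Math.abs(x - mean) <= 3 * sd)
                    .toArray();
        }
    }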
4.4.1 The effect of latency on locating victims
Before analysing the data, outliers must be removed, as these may unnaturally skew the results. One data point lies more than three standard deviations from the mean. A box and whisker plot is used to graphically indicate the data, and the outlier can be clearly seen. The subject's results are therefore removed from the dataset. This participant had particularly poor eyesight (corrected normally by glasses, which were unusable when wearing the head-mounted display), which may have caused the difficulty in finding the balls.
Figure 15 - Box and whisker plot showing outlier
The table below shows that the type of experiment does not have a significant effect on the time taken to find victims. This would suggest that neither the length nor type of delay affects the user's ability to locate victims using the rescue robot. A possible reason for this is that users tended to only look for victims when stationary. They would drive a certain distance, then stop and look around, usually rotating the robot on the spot. This would minimise the effect of latency on this metric, as latency has no effect when the robot is not moving.
Source            Df   Sum Sq   Mean Sq   F value   Pr(>F)
Experiment Type    3   186.3    62.1      0.3018    0.823883
4.4.2 The effect of delay on collision avoidance
An ANOVA is once again used to analyse the data. The output is shown below:
Source            Df   Sum Sq   Mean Sq   F value   Pr(>F)
Experiment Type    3   1.5269   0.5090    3.4044    0.02784
This shows that the type of experiment has a significant effect on the results at the 5% level.
Figure 16 - Graph showing average seconds per collision, with standard deviations
The graph above shows the average number of seconds per collision for each experiment type. It is clear from this that an increase in delay has a negative impact on the user's ability to avoid collisions. Worth noting is that, whilst it was expected that the variable delay would prove the most challenging with regard to collision avoidance, this was not the case. What can be seen from the graph is that the standard deviation for the variable delay experiments is far higher than that of the constant delay experiments. This suggests that variable delay has a far wider spectrum of impact on users than constant delay does.
4.4.3 The effect of delay on flipping avoidance
The ANOVA conducted shows that there is not a significant relationship between the type of experiment used and the results.
Source            Df   Sum Sq     Mean Sq    F value   Pr(>F)
Experiment Type    3   0.000160   0.000053   1.1870    0.328381
This would suggest that the delay has little or no effect on the user's ability to avoid flipping over. This could, however, simply be due to the fact that there were few flips throughout the course of the experiments, and that, sometimes, a user would flip repeatedly whilst trying to go through a door or around a sharp corner. These observations would suggest that this result should be further examined, but time constraints prevented this line of research from being pursued in this study.
4.4.4 The effect of delay on waypoint navigation
Two outliers were found, as can be seen on the box and whisker plot. They were caused by users getting lost in the map and giving up, thereby causing excessively large scores. These were both removed from the data set. The ANOVA test reveals that the type of experiment is significant at the 5% level. It would appear that the increase in delay hampered the user's attempts to travel to a waypoint.
Figure 17 - Box and whisker plot showing outliers

Source            Df   Sum Sq   Mean Sq   F value   Pr(>F)
Experiment Type    3   1.0703   0.3568    3.0688    0.03780
Figure 18 - Graph showing average seconds per waypoint, with standard deviations
Once again, it would appear that the variability of the delay had no effect; rather, it was the magnitude of the delay that was important. The standard deviation is fairly large across all experiments involving delay, indicating the wide range of abilities demonstrated by different users.
4.4.5 The effect of delay on efficient navigation
The same two results from the last test were found to be outliers again and were removed. As can be seen from the ANOVA results, the experiment type has a very significant effect on the user's ability to travel the optimum path to each waypoint.
Figure 19 - Box and whisker plot showing outliers

Source            Df   Sum Sq    Mean Sq   F value   Pr(>F)
Experiment Type    3   1.24132   0.41377   6.3524    0.001131
The ratio of distance travelled to optimum distance increases as constant delay increases. This would be due to the fact that, under higher latencies, users were often unable to compensate for the delay: a command to stop the robot would only be received by the robot after a certain delay, leading to unnecessary travelling. Once again, however, variable delay did not have the expected effect on distance travelled.
Figure 20 - Graph showing average distance ratios, with standard deviations
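To see the scale of this effect, note that a robot travelling at speed \( v \) keeps moving for the delay period \( t_d \) after the operator issues a stop command, overshooting by \( d = v \, t_d \). At an illustrative speed of 1 m/s (not a figure measured in this study), the 140ms constant delay alone adds \( 1 \times 0.14 = 0.14 \) m of unnecessary travel every time the robot is stopped, before any corrective backtracking is counted.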
5. Conclusion and Future Work
5.1 Summary of work
This study has succeeded in facilitating communication between Nintendo Wii gaming console components and a rescue robot simulator. In addition, it used the WiiRobot system to investigate the effect of latency on the ability of users to locate victims, avoid collisions and flipping over, and navigate to waypoints quickly and efficiently. It was found that latency had a significant effect on collision avoidance and navigation, both in terms of time taken and distance travelled. It was also found that the variability of the latency has no significant effect on users; rather, it is the magnitude of the delay that has the most significant effect.
5.2 Impact of results
The findings of these experiments should be seen, on the whole, as good news for the rescue robot community. The fact that delay does not seem to have a significant impact on the ability of users to find objects (survivors or bodies, in the case of most rescue robots) is one of the most important findings of this study, as it is the one that has the most bearing on the ability of rescue robots to perform their core function: to save lives. It was found that delay did have an effect on the ability of users to avoid collisions. The implication of this is that robots likely to be used in circumstances which involve inherently high latencies should be designed more robustly than those which do not experience such latencies. This will ensure that they are not badly damaged by collisions with obstacles in their environment. Alternatively, a certain level of autonomy could be built into the robot, thereby allowing it to recognise that there is an obstacle in its path and take the necessary steps to avoid it, either stopping or turning to one side. The fact that delay has a significant effect on the ability of users to navigate to waypoints, both in terms of speed and distance travelled, suggests that, should a rescue robot be expected to fulfil this function, it will have to take into consideration the extra time
and distance inherently incurred when delay is introduced. This could involve taking steps such as increasing battery life, should the effect of the delay be substantial. The most surprising discovery of this experiment was that variable delay did not have a greater impact than constant delay. Intuitively, one would assume that users would be affected more if the delay is not constant, as they would be unable to predict and compensate for it. This does not appear to be the case, as results showed that performance decreased as the magnitude of the delay increased, but did not decrease by a greater margin when the delay was variable. It should be noted that in all three experiments with significant results, the means for the variable delay (with mean 80ms) and the constant delay of 70ms were very similar (differing by less than 9% in each case). This would suggest that it is simply the magnitude of the delay which has the greatest effect on the user, and not the variability. Due to the wide range of environments that rescue robots are expected to operate in, this should be seen as a very positive result. Certain situations would naturally involve variable delay, and it has been shown that this will not have any greater impact than if the delay was constant.
5.3 Future work
Further work could be carried out to determine whether higher standard deviations of variable delay have different effects on users. In addition, other researchers have experimented with much higher delays (of the order of seconds rather than milliseconds), so it would be interesting to test their results on the WiiRobot system. Moving away from the effect of delay, there are many other factors that could be tested using the WiiRobot system. These include different acceleration functions (how fast the robot accelerates relative to the distance the Nunchuk joystick is pushed forward; see the sketch below), the effect of HUD elements such as the laser scanner, and possible changes to the laser scanner to make it more understandable. Also of interest would be the implementation of a mapping function whereby the laser scanner is used to map the robot's environment for later evaluation.
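As an illustration of the first of these, the sketch below shows one possible family of acceleration functions mapping joystick displacement to robot speed. The normalised input range and the exponent parameter are assumptions made for the purpose of the example: an exponent of 1 gives a linear response, while an exponent of 2 gives a gentler response near the centre and a sharper one near full push.

    public class AccelerationFunction {

        // displacement is assumed normalised to [0, 1] (centre to full push).
        static double speedFor(double displacement, double exponent, double maxSpeed) {
            double clamped = Math.max(0.0, Math.min(1.0, displacement));
            return maxSpeed * Math.pow(clamped, exponent);
        }
    }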
7. Appendix

7.1 Waiver
Experiment Title: The evaluation of the Nintendo Wiimote and Nunchuk for teleoperation in urban search and rescue environments.

Purpose of the research study: The purpose of this study is to determine the suitability of the Nintendo Wiimote and Nunchuk as a control mechanism and means of head tracking for robotic control.
What you will be asked to do in this study: Volunteer participation in this research project will take place in the Experiment Room in the Computer Science Building. Following a brief informal briefing about the simulator, you will be given the opportunity for a 10-minute test drive of the robot within the simulator, so as to become familiar with the controls and acclimatised to the virtual environment. After a short rest period, you will be asked to perform a number of tasks, and after completing each task you will be given a short questionnaire.

Time Required: Approximately 50 minutes
Risks: There is a small risk of subjects developing what is ordinarily referred to as simulator sickness. It occurs infrequently in subjects who are exposed to prolonged continuous testing in simulated environments. Symptoms consist of nausea and a feeling of being light-headed. The risk is minimised as a result of the short duration of each session in the simulator.
Five-minute breaks will be given at intervals if needed.
Potential side effects of virtual environment (VE) use include stomach discomfort, headaches, sleepiness, and mild degradation of postural stability. However, these risks are no greater than those participants would be exposed to if they were to visit an amusement park such as Ratanga Junction, with attractions such as roller coasters.
Benefits/Compensation: There is no direct benefit to you from participation in this study. All volunteers will receive R20 for their time and effort in completing this study.

Privacy: Your identity will be kept confidential. Your name will not be used in any report.
Voluntary participation: Your participation in this study is voluntary. You have the right to withdraw from this study at any time without consequence.
More information: For more information or if you have questions about this study, contact Jason Brownbridge ([email protected]) or Graeme Smith ([email protected]).
□ I have read the procedure described above
□ I voluntarily agree to participate in the procedure
□ I am at least 18 years of age
Participant
Date
7.2 Pre-experiment Questionnaire

Subject Number: ___________________________________________________
Age: ___________________________________________________
Gender: Male / Female
Previous Robotic Control Experience: Yes / No
Previous Computing Experience: Yes / No
Corrected Vision: Yes / No
Hearing Problems: Yes / No
Colour Blind: Yes / No

For the following questions, please circle the number which best represents your experience.

1. On average, how much time do you spend using a computer per week?
   1   2   3   4   5   6   7
   (1 = No time, 7 = Almost all my time)

2. On average, how much time do you spend gaming per week?
   1   2   3   4   5   6   7
   (1 = No time, 7 = Almost all my time)

3. What level of previous virtual reality experience have you had?
   1   2   3   4   5   6   7
   (1 = None, 6 = Used a head mounted display, 7 = Full virtual reality experience)
7.3 Post-experiment Questionnaire

Experiment Number: ___________________________________________________________
Subject Number: ___________________________________________________________
Corrected Vision: Yes / No
Colour Blind: Yes / No

For the following questions, please circle the number which best represents your experience.

1. Please rate your sense of being in the virtual environment, on a scale of 1 to 7, where 7 represents your normal experience of being in a place.
I had a sense of “being there” in the virtual environment:
   1   2   3   4   5   6   7
   (1 = Not at all, 7 = Very much)

2. To what extent were there times during the experience when the virtual environment was the reality for you?
There were times during the experience when the virtual environment was the reality for me…
   1   2   3   4   5   6   7
   (1 = At no time, 7 = Almost all the time)

3. When you think back to the experience, do you think of the virtual environment more as images that you saw or more as somewhere that you visited?
The virtual environment seems to me to be more like…
   1   2   3   4   5   6   7
   (1 = Images that I saw, 7 = Somewhere that I visited)

4. How difficult was it to control the robot under these conditions?
   1   2   3   4   5   6   7
   (1 = Easy, 7 = Impossible)

5. During the time of the experience, which was the strongest on the whole: your sense of being in the virtual environment, or of being elsewhere?
I had a stronger sense of…
   1   2   3   4   5   6   7
   (1 = Being elsewhere, 7 = Being in the virtual environment)

6. Consider your memory of being in the virtual environment. How similar, in terms of the structure of the memory, is this to the structure of the memory of other places you have been today? By ‘structure of the memory’ consider things like the extent to which you have a visual memory of the virtual environment, whether that memory is in colour, the extent to which the memory seems vivid or realistic, its size, location in your imagination, the extent to which it is panoramic in your imagination, and other such structural elements.
I think of the virtual environment as a place in a way similar to other places that I’ve been today…
   1   2   3   4   5   6   7
   (1 = Not at all, 7 = Very much so)

7. During the time of your experience, did you often think to yourself that you were actually in the virtual environment?
During the experience I often thought that I was really standing in the virtual environment…
   1   2   3   4   5   6   7
   (1 = Not very often, 7 = Very much so)

Any Comments:
_______________________________________________________________________
_______________________________________________________________________
_______________________________________________________________________
_______________________________________________________________________
7.4 Screenshots of WiiRobot system

7.5 Instructions given to participants
Sandbox Instructions

Intro
This is the Sandbox, where you get to learn how to drive the robot. Hold the Nunchuk in whichever hand feels more comfortable, and the Wiimote in the other hand. To drive the robot, press the joystick on top of the Nunchuk in the direction you wish to go. You can turn the camera independently of the robot by holding down the “Z” button on the Nunchuk and using the joystick to move the camera. Use the laser scanner/pan display and tilt display to see where the camera is pointing in relation to the robot. To re-align the camera with the robot, press the “C” button on the Nunchuk.

The robot can tip over if you drive into walls. Avoid doing this, as a real robot would be severely damaged by collisions or tipping over. If you do tip over, press the “Home” button on the Wiimote to right yourself. The Wiimote will vibrate when you collide with something.

Observe the laser scanner, which allows you to judge, in more complex environments, where obstacles are, and assists you in steering around them. It works by sending out 180 beams in an arc around the front of the robot and recording how far they travel before they hit something; if you see red on the laser display, this means that there is nothing in front of you. Overlaid on the laser-scanner display is the pan display, which shows the direction the camera is pointing in relation to the robot. The tilt display, to the left of this, shows the tilt of the camera when the camera is mobile.

The GPS system displays your current coordinates as well as the coordinates of your next waypoint. Travel towards the waypoints. When you get close enough, the system will log that you have located the waypoint and a sound will play. A new waypoint will then be displayed.
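(For the reader: a minimal sketch of how a display of this kind might colour each beam is given below. The maximum beam range and the shading scheme are assumptions; this is illustrative only, not the actual WiiRobot HUD code.)

    import java.awt.Color;

    public class LaserDisplay {

        private static final double MAX_RANGE = 20.0; // metres, assumed

        // Nearer obstacles are drawn darker; a beam that travels its full
        // range without hitting anything is drawn red ("nothing in front").
        static Color colourFor(double range) {
            if (range >= MAX_RANGE) {
                return Color.RED;
            }
            int shade = (int) (255 * range / MAX_RANGE);
            return new Color(shade, shade, shade);
        }
    }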
Aim
Drive around the track. You will do two types of experiments today; this sandbox aims to teach the basics of both. The first type requires you to find pool balls hidden around a map. When you see a pool ball, say aloud “Located ball [number]” or “Located [colour] ball”; this will allow us to log your finding of the balls. The second type of experiment requires you to locate waypoints using the GPS system. Use the GPS to drive to each waypoint as it is shown. When you reach a waypoint, a chime will play; then move to the next waypoint. You have 5 minutes to familiarise yourself with the robot. Have fun.
Figure 21 - The pool balls 1 through 8 will be hidden in each map.
Instructions for GPS

Intro
Driving the robot is the same as in the Sandbox, but now you cannot turn the camera independently of the robot.

Aim
The aim of this exercise is to locate the waypoints displayed by the GPS system in as short a time as possible. Travel to each waypoint as it is displayed. You will know you have located a waypoint when you hear a chime play and a new waypoint is displayed. As before, avoid collisions or tipping over whenever possible.
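(For the reader: a waypoint check of this kind might be implemented as in the sketch below. The radius and coordinate format are assumptions rather than details of the WiiRobot system.)

    public class WaypointCheck {

        private static final double REACHED_RADIUS = 1.0; // metres, assumed

        // The waypoint counts as located once the robot's GPS position
        // comes within REACHED_RADIUS of it; on reaching, the chime plays
        // and the next waypoint is displayed.
        static boolean reached(double robotX, double robotY,
                               double waypointX, double waypointY) {
            double dx = robotX - waypointX;
            double dy = robotY - waypointY;
            return Math.sqrt(dx * dx + dy * dy) <= REACHED_RADIUS;
        }
    }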
Instructions for pool ball location

Intro
Driving the robot is the same as in the Sandbox, but now you cannot turn the camera independently of the robot.

Aim
The aim of this exercise is to find as many pool balls as you can in 5 minutes. As before, you must say aloud the pool ball's number or colour when you locate it, and avoid collisions or tipping over whenever possible.