Instrument Design as a Means to Enhancing Expressivity

Robert C. Garvin IV

Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Fine Arts in Sound Design at The Savannah College of Art and Design

© May 2012, Robert Calvin Garvin IV

Signature of Author and Date ____________________________________ ___/___/___

Matt Akers, Committee Chair ____________________________________ ___/___/___
Robert Miller, Committee Member 1 ____________________________________ ___/___/___
Andre Ruschkowski, Committee Member 2 ____________________________________ ___/___/___

Instrument Design as a Means to Enhancing Expressivity

A Thesis Submitted to the Faculty of the Sound Design Department in Partial Fulfillment of the Requirements for the Degree of Master of Fine Arts

Savannah College of Art and Design

By Robert Calvin Garvin IV
Savannah, GA
May 2012

Dedication:

This paper is dedicated to my family, Bob, Beth, Amanda, and Kathryn; my friends, roommates, and classmates; campus ministers, pastors, and church families; and most of all, Christ Jesus. Without all the encouragement and support from these people, I would not have made it to where I am today. Thank you.

Acknowledgements:

I would like to first thank Matthew Akers, Robert Miller, and Andre Ruschkowski for serving on my thesis committee, and for all the wisdom that they have imparted to me both during my studies at SCAD and in the thesis completion process. I would also like to thank David Stivers for his advice and lectures in his class, Writing the Graduate Thesis. I would also like to thank the entire faculty of the sound design department at the Savannah College of Art and Design. Through their education, my understanding of and love for sound design, and for the technology used in this wonderful art form, has increased tenfold.
Finally, I would like to thank the faculty and family of professors in the Music department of LaGrange College. There, my love for music and technology was fostered, and the education I received there has served as the solid foundation upon which my graduate studies have been built. To all these people who have graciously donated their time and energy over the years: thank you.

Table of Contents:

Figure List
Abstract
Introduction
Human Computer Interaction
    Affordances and Constraints
    Gestural Mapping
    Feedback Strategies
    Summary
Instrument Review
    Monome
        Software Features
    Eigenharp
    Summary
System Proposal
    Gestural Control and Mapping
    Application Control Methods
    Possible Applications and Compositional Methods
    Feedback
    Summary
Closing Discussion
Bibliography

Figure List:

Figure 1 – Monome 64
Figure 2 – Eigenharp Alpha
Figure 3 – Diagram of Digital Musical Interface (Magnusson)
Figure 4 – Monome, exploded view
Figure 5 – Various Monome Applications
Figure 6 – Shift Functions
Figure 7 – Original Digital Chimes File

Image sources for Figure 2, clockwise from left:
- Geert Bevin, “Alpha alone,” EigenZone, Geert Bevin, last modified June 21, 2010, http://www.eigenzone.org/2010/06/21/eigenharp-alpha-review-after-three-months/.
- Mike Milton with Alpha, AudioFanzine, last modified March 27, 2012, http://en.audiofanzine.com/misc-midi-controller/eigenlabs/eigenharpalpha/user_reviews/r.92581.html.
- Geert Bevin, “Eigenharp Alpha Experiment 20100624” (screen capture), Vimeo.com, 2011, http://vimeo.com/12820692.
Image source for Figure 6: compiled from Sweetwater Sound, 1600-Axiom49Edu10_top.jpg, last modified 2012, http://www.sweetwater.com/store/detail/Axiom49Edu10.

Instrument Design as a Means to Enhancing Expressivity
Robert Calvin Garvin IV
May 2012

Abstract:

The goal of this thesis is to use principles from the study of human-computer interaction to evaluate specific software features of two electronic music controllers, the Eigenharp and the Monome, and to ascertain the types of musical expressivity these devices afford. Once these devices have been evaluated, the results of the evaluation will be collated into a set of guidelines, from which a software application that provides similar functionality for the MIDI keyboard will be developed.

Introduction

Computers are used today to perform a wide variety of functions. In career fields around the world, including business, research, and design, computers are used to aid in the tasks of many professions. Because computers today are both powerful and versatile, many musicians and composers can also use them in the development of their works, widening the possibilities of the composition. In the field of computer music, much research is devoted to developing and improving the ways that musicians interact with music software. People who work in this field range from computer programmers and instrument designers to musicians and composers. A great amount of energy is spent on developing methods of inputting control signals from various sources, including audible, visual, and gestural, and on using these signals to control various parameters within the software during a live performance.
Commercial software packages such as Avid’s Pro Tools and Propellerhead’s Reason are two examples of software used in computer music composition. Such software is designed with a particular task in mind, and is built specifically to make the completion of that task as efficient as possible. Pro Tools is primarily designed to record, edit, and mix audio files. It is arranged in a series of tracks that provide individual controls, allowing for separate processing and mixing in a simple, refined interface. Propellerhead’s Reason focuses on MIDI sequencing, though in more recent software updates audio recording and editing have been implemented. The software is based on a “rack unit” organization system, simulating stacks of instruments and effects units in a recording studio. Users can custom-patch their instruments together, creating interesting and evolving sounds to use in their projects. Some manufacturers also produce hardware devices that are specifically designed to make working with these applications even more efficient by incorporating frequently used elements of the software into the device, giving the user instant physical access to them. Faders, knobs, transport controls, and menu commands are among the features that these control surfaces may provide, allowing control over portions of the software without having to look at the screen or navigate a mouse to select and use them. While these programs and hardware devices are highly efficient at recording and mixing audio, or at sequencing and editing MIDI events, and their specific functions allow the composer to create complex sounds and textures for release as a single track, they are not primarily designed to assist the performance of a complex track in a live situation. Alternatively, many software applications exist that allow the computer musician to design and custom-build their own software systems.
Rather than using a text-based programming language, programs such as Max/MSP provide a palette of graphical objects that are used to program the software while also providing elements to be included in the user interface. When creating custom software, the design can sometimes become so complex that a dedicated hardware device is almost necessary in order to provide full functionality within the application. In other cases, both the hardware and the software are designed together, resulting in an entirely new digital music system. The Monome and the Eigenharp are both examples of this kind of design process. The two differ greatly in both their appearance and the techniques used to play them, yet the resulting compositions and performances share a common trait: both provide the musician with all the tools necessary to conduct a solo presentation. These presentations, while structured and planned prior to performance, still retain the flexibility to allow for improvisation and other performance practices.

The Monome has a very simple, minimal design that translates into ease of use and clarity of control. The original Monome sixty-four (fig. 1) derives its name from the eight-by-eight grid of buttons that makes up its control surface. This translucent grid of buttons has beneath it an identical grid of sixty-four LED lights which, when controlled by the software, serve as status indicators for the player during a performance. The device runs a number of applications, each with its own specific function. Brian Crabtree built the original Monome in an effort to streamline the control of software that he had developed.

Figure 1 – Monome 64 – An impressive kit build of the Monome 64.
Eventually, a friend popularized the device through his tours and shows, and Crabtree produced a limited number for release in 2006.3 Since then, Crabtree has opened a small business, monome.org, with Kelli Cain, and has sold close to three thousand kits and hand-produced Monomes.4

While the Monome can be held in the hand, the Eigenharp (fig. 2) is a little over four feet long, and requires the use of either the harness-like instrument strap or the built-in adjustable floor spike to steady the instrument while playing. In 2000, John Lambert started a small company called Eigenlabs5 to begin development on a simplified solution for performing electronic music. Ten years later, the Eigenharp Alpha was released, and over the next twelve months two more models, the Tau and the Pico, were released, providing the market with lower-cost alternatives. The Alpha features a main playing keyboard, a row of percussion keys, two strip controllers, and a breath controller, making for a total of five types of expressive gesture on the body of the device.6 Through these gestural inputs, the user is able to exercise nuanced control over the music, incorporating subtle vibrato via the unique tilt mechanism in each key, or adding effects to drum loops through the strip controllers or the breath pipe.

3. Vlad Spears, “Monome!,” Vlad Spears – Pterodactyls, Blue and You, Vlad Spears, last modified April 23, 2006, http://vladspears.com/2006/04/monome/.
4. Monome, "orders," monome.org, accessed May 24, 2012, http://monome.org/order.
5. John Lambert, “Eigenlabs,” John Henry Lambert, John Lambert, last modified July 2009, http://johnhenrylambert.com/projects/eigenlabs.html.
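The idea of a continuous key-tilt gesture adding vibrato can be sketched in a few lines. The following is a minimal, hypothetical illustration, not Eigenlabs' actual implementation: the tilt value scales the depth of a low-frequency sine oscillator, whose output is read as a momentary pitch offset in semitones.

```python
import math

def vibrato_offset(tilt, t, rate_hz=5.0, max_depth_semitones=0.5):
    """Map a key-tilt value (0.0-1.0) to a momentary pitch offset.

    The tilt controls vibrato depth; a low-frequency sine oscillator
    supplies the periodic motion. All names and ranges here are
    illustrative assumptions.
    """
    depth = max(0.0, min(1.0, tilt)) * max_depth_semitones
    return depth * math.sin(2 * math.pi * rate_hz * t)

# With no tilt, the pitch is unmodified regardless of time:
print(vibrato_offset(0.0, 0.123))  # → 0.0
```

A synthesizer would add this offset to the sounding pitch on every control cycle, so more tilt produces a wider, immediately audible vibrato.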
Percussion lines can be “thumped” out on the larger percussion keys, and when switching to a different controller or another physical instrument, a mute key at the very bottom prevents any accidental key presses from crashing a performance. LEDs on each key provide feedback when the key is pressed, and indicate which of the many modes or setups are active, keeping the performer informed at all times of the state of the instrument.

Figure 2 – Eigenharp Alpha – Clockwise from left: The Eigenharp Alpha, Mike Milton with Alpha, Geert Bevin with Alpha.

Both the Monome and the Eigenharp are unique digital musical instruments that provide an equally unique way of creating music, made possible by both their gestural interfaces and their accompanying software features. However, the price of each system, as well as their production process, prevents many from creating music in this distinct way. The goal of this thesis is to establish a list of qualities for evaluating these devices, using principles derived from studies in human-computer interaction, and to evaluate the musical capabilities of the Eigenharp and the Monome according to these principles. From this, it can be argued that a similar style of music performance can be achieved through the proposal of an application that would map these specific parameters to the gestures of a standard MIDI keyboard controller.

6. Eigenharp Alpha product page, Eigenlabs, accessed February 15, 2012, http://www.eigenlabs.com/product/alpha/.

Human Computer Interaction

The technology in today’s computers allows any single personal computer to perform efficiently in almost any given situation, changing its function based on the software it is running,7 even performing a number of different tasks at once.
Thus the study of computer music has also branched out into a variety of fields, examining both the use of music and sound to control lights, servo motors, and other output devices, and the reverse idea: using these elements to control sound. As the number of input devices increased, understanding concepts from the field of human-computer interaction (otherwise known as HCI) became more important to the field of computer music. If a computer can provide a user with a wide range of functions and processes, then the most efficient method of controlling those functions will most likely change as well. By way of illustration, the best way to interface with a word processing application is the QWERTY keyboard. Each key is assigned (or mapped) to a specific letter, and pressing those keys places those letters into the document. For a video game, depending on its complexity, a QWERTY keyboard and mouse may be sufficient, mapping the mouse movement and key presses to various character movements and actions. However, the gameplay of more complex video games will be much more fluid when interfaced with a gamepad or joystick type of device. Similarly, within computer music applications, the interface becomes the bridge between the user and the software program. This bridge serves three main functions, the bases of which are borrowed from the field of HCI: to provide a means of expression to the user through an appropriate gestural interface, to translate these gestures into data streams and map them to parameters within the music software, and to provide feedback to the user on the current status of the system.

7. Bert Bongers, “Electronic Musical Instruments: Experiences of a New Luthier,” Leonardo Music Journal 17 (2007): 9.
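The three bridge functions just described can be sketched as a small event loop: a gesture event arrives, the mapping layer translates it into a parameter change, and a feedback channel reports the new state. The class, gesture names, and parameter names below are hypothetical, chosen only to make the three roles explicit.

```python
# A minimal sketch of the three bridge functions: capture a gesture,
# map it to a software parameter, and report feedback on the result.

class InterfaceBridge:
    def __init__(self):
        self.parameters = {"filter_cutoff": 0.5, "volume": 0.8}
        self.mappings = {}       # gesture name -> parameter name
        self.feedback_log = []   # stands in for LEDs or a display

    def map_gesture(self, gesture, parameter):
        self.mappings[gesture] = parameter

    def handle(self, gesture, value):
        """Translate one gesture event (value 0.0-1.0) into a
        parameter change, then emit feedback on the new state."""
        parameter = self.mappings.get(gesture)
        if parameter is None:
            return  # unmapped gestures are ignored
        self.parameters[parameter] = value
        self.feedback_log.append((parameter, value))

bridge = InterfaceBridge()
bridge.map_gesture("mod_wheel", "filter_cutoff")
bridge.handle("mod_wheel", 0.75)
```

In a real system the feedback log would instead drive LEDs or an on-screen display, and the gesture events would arrive from a hardware driver rather than a direct method call.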
These three ideas provide a foundation upon which to build a set of guidelines that can lead a musician, designer, or programmer through the process of providing a purpose-built controller, or re-purposing an existing controller, for use in computer music applications.

One of the key elements of music performance is conveying musical intent through the visual element. Much like a guitarist or pianist moves with the music in order to both create and convey expression, a musician who plays a digital musical instrument should be able to convey through their gestures the expressive qualities of the music. Unfortunately, when using the computer’s primary interface, there is no perceived link between the performer’s actions and the resulting sound,8 and the performer appears to be babysitting the computer as the musical composition runs its course.9 This fails to engage the audience10 and limits the range of physical gestures available to the performer.11 Thus, a more suitable gestural interface is needed to provide the musician with, first, a more expressive means of control over the music and, second, a stronger visual connection that engages and informs the audience of the relationship between the performer’s gestures and the musical result.

8. Garth Paine, “Towards Unified Design Guidelines for New Interfaces for Musical Expression,” Organised Sound 14, no. 2 (2009): 142.
9. W. A. Schloss, “Using Contemporary Technology in Live Performance: The Dilemma of the Performer,” Journal of New Music Research 32, no. 3 (2003): 239–242, quoted in Sile O’Modhrain, “A Framework for the Evaluation of Digital Musical Instruments,” Computer Music Journal 35, no. 1 (2011): 32.
10. Paine, “Towards Unified Design Guidelines,” 142.
11. David Wessel and Matthew Wright, “Problems and Prospects for Intimate Musical Control of Computers,” Computer Music Journal 26, no. 3 (2002): 11.
Affordances and Constraints

The sub-topic of affordances and constraints deals with this very aspect of providing the user with the features, at both the hardware and the software level, needed to efficiently and expressively control the sound. Thor Magnusson, author of “Designing Constraints: Composing and Performing with Digital Musical Systems,” argues that designing from constraints may prove more fitting within the context of digital musical instruments. Arguing against affordances, he states that the “highly varied interpretations and definitions” of the term “affordances” make it difficult to use the term clearly.12 These varied interpretations range from the objective: features and functions that the object offers to the user;13 to the perceptive: features that the user perceives as being offered;14 to the subjective: affordances that are “carefully and simply encoded internal representations of external objects, the encodings capturing the functional significance of the object.”15 This final definition is extended to include abilities and functions learned within a social setting, allowing cultural surroundings to influence the perceived (or perhaps preferred) affordances of an instrument.16

While these are indeed somewhat varied, each definition offers a different view of the object. When designing an item, an affordance is something that a creator desires to offer or provide to the user.17 The user, not necessarily knowing the intent of the creator, perceives only a portion of these affordances.18 Finally, a group of users, either within a specific region or connected via the web, shares not only features, but also techniques and compositional strategies, forming a cultural perception of the device.19 Magnusson states that, while affordances may be easily recognized when first learning an instrument, “the constraints of a musical instrument are often not directly perceptible at the initial encounter.”20 For this reason, it would seem beneficial to design a digital music system using both affordances and constraints.

One of the scholars mentioned by Magnusson is Donald Norman.21 Norman is an accomplished author in the field of HCI, and founder of the Nielsen/Norman Group, a user experience and usability consulting firm. In Sergi Jordà’s article on user-friendly new musical instruments, Jordà quotes Norman discussing the affordances and constraints found in a pair of scissors. Norman describes how the appearance of a pair of scissors automatically suggests that they be held in a certain way. “The holes are affordances: they allow the fingers to be inserted. The sizes of the holes provide constraints to limit the possible fingers….”22 The interface also suggests possible operations. “You can figure out the scissors because their operating parts are visible, and the implications clear. The conceptual model is made obvious, and there is effective use of affordances and constraints.”23 This implies that a good design has a balance of affordances and constraints.

The implementation of a piano-style keyboard in analog synthesizer units is an excellent example of balanced affordances and constraints. The theremin, invented in 1919 by Leon Theremin,24 had two antennas, which gave the player control over the pitch and amplitude of an oscillator through the waving of their hands. This was a difficult method of control, and very few were able to truly master the instrument. Nonetheless, it was popular enough for Robert Moog in the 1950s to make money selling theremins with his father.25 Eventually, the electronic music pioneer Raymond Scott bought one of these theremins from Moog and re-engineered it, replacing the pitch antenna with a keyboard and thereby creating his Clavivox.26 This constrained the pitch of the theremin to the standard of Western tuning.

12. Thor Magnusson, “Designing Constraints: Composing and Performing with Digital Musical Systems,” Computer Music Journal 34, no. 4 (2010): 63.
13. J. J. Gibson, The Ecological Approach to Visual Perception (Boston, Massachusetts: Houghton Mifflin, 1979), quoted in Thor Magnusson, “Designing Constraints,” 62.
14. Donald A. Norman, The Psychology of Everyday Things (New York: Basic Books, 1988), quoted in Thor Magnusson, “Designing Constraints,” 63.
15. A. H. Vera and H. A. Simon, “Situated Action: A Symbolic Interpretation,” Cognitive Science 17 (1993): 7–48, quoted in Thor Magnusson, “Designing Constraints,” 63.
16. A. Costall, “Socializing Affordances,” Theory and Psychology 5, no. 4 (1995): 467–481, quoted in Thor Magnusson, “Designing Constraints,” 63.
17. J. J. Gibson, The Ecological Approach, quoted in Thor Magnusson, “Designing Constraints,” 62.
18. D. A. Norman, The Psychology of Everyday Things, quoted in Thor Magnusson, “Designing Constraints,” 63.
19. A. Costall, “Socializing Affordances,” quoted in Thor Magnusson, “Designing Constraints,” 63.
20. Thor Magnusson, “Designing Constraints,” 64.
21. Kara Pernice, "About Don Norman - jnd.org," Don Norman's jnd.org website / human-centered design, accessed May 16, 2012, http://jnd.org/about.html.
22. D. A. Norman, The Design of Everyday Things (New York: Doubleday, 1990), 12–13, quoted in Sergi Jordà, “FMOL: Toward User-Friendly, Sophisticated New Musical Instruments,” Computer Music Journal 26, no. 3 (2002): 27.

Gestural Mapping

Traditional acoustic instruments were designed and built in such a way that the interface the player interacts with and the sound source that generates and amplifies the sound are joined together. This was necessary, as the instrument would cease to be functional if the two elements were separated.
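The constraint the Clavivox's keyboard imposes can be stated precisely: a continuous theremin-style frequency is snapped to the nearest note of twelve-tone equal temperament. The sketch below is an illustrative reconstruction of that constraint (assuming the common A4 = 440 Hz reference), not Scott's circuitry.

```python
import math

def quantize_to_semitone(freq_hz):
    """Snap a continuous frequency to the nearest equal-tempered pitch.

    Frequencies are converted to a (rounded) MIDI note number and back:
    the rounding is exactly the keyboard's constraint on the theremin's
    otherwise continuous pitch space.
    """
    midi_note = round(69 + 12 * math.log2(freq_hz / 440.0))
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# 450 Hz lies between A4 (440 Hz) and A#4 (~466.16 Hz) and snaps to A4:
print(round(quantize_to_semitone(450.0), 2))  # → 440.0
```

Every input frequency is pulled to one of twelve pitches per octave, which is what "constrained to the standard of Western tuning" means in practice.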
Whether the source is excited directly by a player’s hands or lips, or indirectly by a bow or a key, “these two elements are often one part and tightly coupled.”27 Digital musical instruments also have both of these elements; however, they are physically separated, connected only by the data flowing between them. In his “Designing Constraints” article, Magnusson includes a diagram of a typical digital musical interface.28 In Figure 3, Magnusson delineates the digital instrument as a whole, but also indicates that the mapping and sound engines serve as the “instrumental model,” providing the greatest influence over the resulting sound.29 The musician’s influence over the sound engine is traced from the musician, through the controller, to the mapping engine, and finally to the sound source.30 The interface of the system remains in the hands of the musician, while the sound source is digital, relying on computer software and user input to generate sound. “[This] physical and logical separation of the input device from the sound production necessitates multiple ways of processing and mapping the information coming from the input device.”31 This connection is essential to the function of the instrument.

Figure 3 – Thor Magnusson – Diagram of Digital Musical Interface. This diagram shows the flow of the musician’s influence, while also showing the individual mapping scheme.

23. Ibid.
24. Apple, Inc., “A Brief History of the Synthesizer,” Logic Express 9 Instruments [help file], Apple, Inc., last modified 2009, http://documentation.apple.com/en/logicexpress/instruments/index.html#chapter=A%26section=5%26tasks=true.
25. Robert Moog, "Memories of Raymond Scott," Raymond Scott Archives, Jeff E. Winner, last modified 2012, http://raymondscott.com/#293/custom_plain.
26. Ibid.
27. Bert Bongers, “Electronic Musical Instruments: Experiences,” 11.
28. Thor Magnusson, “Designing Constraints,” 66.
29. Thor Magnusson, “Designing Constraints,” 66.
30. Ibid.
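The "multiple ways of processing and mapping" that this separation necessitates can be illustrated by contrasting two mapping strategies. In the sketch below, a single controller stream (imagine breath pressure, normalized to 0.0-1.0) is mapped either one-to-one to a single parameter, or one-to-many, fanning out to several parameters through different scaling curves. The parameter names are invented for the example.

```python
def one_to_one(value):
    """Map a 0.0-1.0 controller value straight to one parameter."""
    return {"amplitude": value}

def one_to_many(value):
    """Fan one controller value out to several parameters, each
    through its own scaling function."""
    return {
        "amplitude": value,                         # linear
        "brightness": value ** 2,                   # slow onset, fast finish
        "noise_mix": max(0.0, value - 0.8) * 5.0,   # engages only at high pressure
    }

# At moderate pressure only amplitude and brightness respond;
# the noise component stays silent until the gesture grows large.
moderate = one_to_many(0.5)
strong = one_to_many(0.9)
```

The one-to-many case is where the "compositional" character of mapping shows: the designer decides that a small gesture should touch few parameters while a large one recruits more of the sound engine.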
Jordà again states, “the final expressiveness and richness of any musical interface (controller) cannot be independent of its generator and the mapping applied between them.”32 Mapping therefore becomes a necessary and exceedingly important part of designing a digital musical instrument; when designing, both the interface and the sound source should be considered as one device,33 paying close attention to their relationship to one another, as well as to the player.

Magnusson calls mapping “a compositional process that engenders a structure of constraints,”34 i.e., limiting the performer by assigning afforded gestural features to control specific parameters. Magnusson here uses the word “compositional,” suggesting that the composition lies not only in the music to be performed, but also in the process that determines the way the player will perform the music. The process of building the instrument is part of the creative process of developing the composition.35 With this in mind, the goal of mapping when designing a new digital instrument is to assign these data streams to appropriate points of manipulation within the software. When making these decisions, there are a number of things to consider.

32. Ibid.
33. Bert Bongers, “Electronic Musical Instruments: Experiences,” 11.
34. Thor Magnusson, “Designing Constraints,” 65.
35. Paine, “Towards Unified Design,” 150.

Gesture establishes a relationship between the performer and the audience. Gestures are “understood to communicate an authenticity about the momentary events being created,”36 and are used “both as
a means to engage the production of sound on an instrument and as an expression of an inner intentionality.”37 O’Modhrain suggests that if an instrument fails to do this, then it fails as an instrument.38 This connects the significance of an appropriate mapping scheme with the gestural affordances of an instrument.

David Wessel and Matthew Wright also provide a set of guidelines to take into account when mapping. In their article “Problems and Prospects for Intimate Musical Control of Computers,” they suggest that a direct relationship should exist between the size of a gesture and its resulting sound, with larger gestures producing larger results and smaller gestures producing smaller results.39 Predictability is also important, especially when dealing with generative algorithms, in giving the user the feeling that they have control over the processes.40 Finally, Wessel and Wright suggest that similar sounds be placed in proximity to one another.41 They note that both the one-dimensional pitch layout of the traditional keyboard and the two-dimensional pitch layouts of the bandoneon and accordion follow this configuration pattern.42 This concept also applies when mapping gestures to other parameters. For instance, when mapping parameters for filter control, all the controls associated with the filter should be grouped adjacent to one another on the interface. Similarly, if these controls exist in multiple modes, it would seem logical that they be mapped to the same physical controls in each mode.

36. Paine, “Towards Unified Design,” 142.
37. Ibid.
38. Sile O’Modhrain, "A Framework for the Evaluation of Digital Musical Instruments," Computer Music Journal 35, no. 1 (2011): 33.
39. David Wessel and Matthew Wright, "Problems and Prospects for Intimate Musical Control of Computers," Computer Music Journal 26, no. 3 (2002): 14.

Gil Weinberg and Scott Driscoll share these ideas in “Toward Robotic
Musicianship.” They propose that these principles not only help the user to perform on the instrument more expressively, but also provide cues to other performers that “help players anticipate and coordinate their playing.”43 Among other guidelines, a feature akin to a MIDI panic button should be included, allowing the user to smoothly silence a process, or a part of a process, that is not responding properly.44 This is especially important when multiple modes allow a number of processes to run at one time. Wessel and Wright also share the methods they used to map gestures for a digitizing tablet (also known as a drawing tablet). They mention that, while one-to-one gestural mapping does yield subtlety on this interface, the tablet’s higher resolution benefits more from associating different regions of the tablet with different functions.45 One could conclude from this that devices featuring a higher resolution may allow for a more elaborate mapping scheme, while devices with lower resolutions may not support the same level of complexity in their mapping schemes.

40. When working with a system that is more responsive, rather than one employing generative algorithms, the term “reliability” may prove to be a better fit.
41. Wessel and Wright, “Problems and Prospects,” 14.
42. Ibid., 14–15.

Feedback Strategies

As mentioned earlier, predictability is a good quality for a digital instrument to have. It provides the user with a reliable way of understanding the instrument and its current state, allowing them to properly navigate the software via the interface. For most instruments, feedback is the ideal method of providing this predictability. Popular methods of feedback include visual (lights or graphics), audible (beeps, clicks, and other aural cues), and tactile (sensed by touch) approaches. It may be obvious that, when designing a digital musical instrument system, a dedicated audible feedback method is the most difficult to incorporate. However, since
However, since 43 Gil Weinberg and Scott Driscoll, "Toward Robotic Musicianship," Computer Music Journal 30, no. 4 (2006): 28. 44 Wessel and Wright, “Problems and Prospects,” 14. 45 Wessel and Wright, “Problems and Prospects,” 19. Garvin 17 the instrument is designed to respond with sound when interacted with, the music and sounds that are created while playing can serve as a primary means of feedback for the whole system. Sonic feedback in acoustic instruments almost inevitably results in physical vibrations of the interface. This feedback is informative, as each style of playing produces a different recognizable vibration cue. But electronic instruments lose this physical feedback due to the natural separation of interface and sound source. While the benefits of physical or haptic feedback are somewhat widely known, this kind of system has not yet been incorporated into many modern devices, which could be due to the complex nature of the mechanics needed to recreate such a system. Nevertheless, some researchers are making progress in the development of systems that are simplified enough to be included in commercially produced interfaces. For the purpose of a computer based, multi-function electronic instrument, a visual interface seems the best suited for providing the user with the feedback necessary to understand the system’s current state. As mentioned before by Paine, the quality of a performance is significantly diminished when the performer remains glued to their computer screen.46 Therefore, the implementation of a controller is expected to free the performer from the confines of such a limited gestural interface. However, the more complex a digital music instrument becomes, the more helpful a form of visual feedback will be in guiding the user through the software settings. Whenever possible, this visual feedback should be incorporated into the physical controller. 
Either by means of a small display or light-emitting diodes (LEDs), the current state of the instrument can be communicated to the player in an abbreviated format. A series of LEDs can, after a brief learning period, inform the user of running processes, current parameter settings, and active modes of operation without requiring a computer to be visible during the performance. When the information required to operate the system can no longer be wholly conveyed by means of lights or small displays built into the device, or when the system calls for the use of a device that has no built-in feedback, this information can be displayed on the computer screen and referenced from time to time without lowering the quality of the performance.

Summary

The concepts listed above are intended to provide a set of guidelines that will assist in evaluating both current and new digital musical instruments. From David Wessel and Matthew Wright, we learned that the standard interface for a computer is insufficient for live performance.47 A more appropriate interface is necessary for the audience to understand the relationship between the performer and the music being created. The section on affordances and constraints showed that while there is a wide variety of uses of the term affordances, it can be difficult to discuss a specific design without using the term, prompting a more balanced incorporation of both affordances and constraints.

46. Paine, “Towards Unified Design,” 142.
Next, Bert Bongers noted that acoustic instruments have an interface and sound source built into each other, whereas electronic instruments lack this integration, being separated into two components: the physical hardware interface and the digital, computer-based sound source.48 The necessity of an efficient mapping scheme was then introduced, since a digital instrument cannot function without one,49 while the importance of gesture in communicating with the audience was also mentioned,50 underscoring the connection between gestural affordances and mapping schemes.51 David Wessel and Matthew Wright propose that when mapping an instrument's features to those of the software, similar sounds (as well as similar controls for other functions) should be mapped to similar locations on the physical instrument.52 These guidelines assist performers in understanding one another's intentions, creating a more unified performance.53 Finally, the research suggested that the most logical and economical method of feedback is a visual system on the instrument itself, followed by a graphic user interface on the computer screen. From this set of guidelines, we can properly evaluate the features of the Eigenharp and the Monome, and begin to visualize a system for the keyboard controller.

47 Wessel and Wright, "Problems and Prospects," 11.
48 Bert Bongers, "Electronic Musical Instruments: Experiences," 11.
49 Sergi Jordà, "FMOL," 24.
50 Sile O'Modhrain, "A Framework," 33.
51 Wessel and Wright, "Problems and Prospects," 14.
52 Ibid.
53 Weinberg and Driscoll, "Toward Robotic Musicianship," 28.

Instrument Review

The Monome and the Eigenharp are two instruments that are unique in their physical design and gestural controls, yet both provide a similar method of creating music. The software applications that process the incoming data streams and generate the sound are primarily responsible for this ability.
In order to determine the functionality of these instruments, and subsequently the features of the software application proposed in this thesis, the information below has been synthesized from various websites and reviews.

Monome

The Monome sixty-four, as described earlier, has a grid of sixty-four buttons and lights that allow the user to better understand the current status of the device. It is small enough to be held in two hands, enabling the user to reach all sixty-four points of manipulation,54 and giving the audience a better idea of how the musician's actions affect the music.55 Repositioning the performer in front of the audience, rather than in front of a screen, increases the communication between performer and audience. It is no longer an audience observing a performer, but a performer engaging the music with the audience. Much of the Monome's flexibility is derived from the grid of buttons and lights housed in the device (fig. 4). These lights are not one hundred percent dependent upon the buttons above them being pressed in order to light up. They serve as status indicators for the various software applications, and are therefore separated (or "decoupled")56 from one another, allowing the lights to change the feedback based on the current application. Each application has its own dedicated function, and changes the behavior of the device to properly reflect the current software in response to the user's actions.

[Figure 4 – Monome, exploded view. A user's design of a custom enclosure for a Monome kit build. Layers from top to bottom: faceplate, buttons, grid PCB (housing lights), logic PCB, and enclosure.]

54 Vlad Spears, "Monome!," 2006.
55 Tim Yu, "Monome," Cool Hunting, last modified January 25, 2008, http://www.coolhunting.com/tech/monome.php.
Christophe Stoll agrees that this is an important feature, stating that "it allows for infinite adaptability and a very intuitive and responsive interface."57 This provides a level of predictability, as suggested by Wessel and Wright, and gives the performer a better understanding of the current state of the application at any given time.58 The Monome runs small, single-task applications that are programmed by the many users of the Monome.59 Most of these applications are developed in the Max/MSP programming environment (fig. 5), but many other graphical programming languages, such as Bidule, Reaktor, and Pure Data, as well as text-based programming environments like ChucK, Ruby, Java, Python, and Processing, are also able to interpret these data streams. This enables programmers unfamiliar with Max/MSP to conform the device to suit their needs.60 The notion of a user-based development group would seem to suggest an increased chance that an application will be poorly documented or poorly organized, resulting in an application that is difficult or impossible to run.

[Figure 5 – Various Monome applications – Clockwise: 64fingers (Steve Burtenshaw, 2011), Polygomé (Matthew Davidson, 2011), Inkblot (Amanda Ghassei, 2012), and Straw (Matthew Davidson, 2010), http://docs.monome.org/doku.php?id=app.]

56 Christophe Stoll, "'Our best protection is our openness,'" Precious Forever, last modified July 9, 2008, http://precious-forever.com/2008/07/09/our-best-protection-is-our-openness/.
57 Ibid.
58 Wessel and Wright, "Problems and Prospects," 14.
59 While programming for the Monome can prove to be somewhat of a task, one does not have to program in order to use existing applications to create music.
While this can be the case for some applications, Monome users promote and praise user-friendliness in application design, and the nature of the programming software allows quick modification of both the user interface and the inner workings of an application. This user-driven design also contributes to building the community, which freely shares advice and tips on developing applications for the device. Furthermore, the user base is willing to provide advice and troubleshooting to new users, as well as assistance with the initial setup.61

Despite these features, the Monome does have some downsides. The setup is somewhat complicated, requiring the manual installation of three pieces of software as well as downloading the various Monome applications. On top of that, it takes many hours of practice to become familiar with the unit, discouraging some novice users from continuing with the device. One particularly important downside is the resolution of the instrument: each button senses only an on/off state. Because no standard has been developed for application design, it can be difficult to tell whether the Monome is set up incorrectly, the application is set up improperly, or the user is not operating the software properly. And while Crabtree and Cain's business ethics are more environmentally conscious, their decision to source materials from local shops raises the overall price of the Monome and extends both the production time and the time between production runs, as Rothwell points out.62 The cheapest unit currently sold is usually priced around $500.

60 Nick Rothwell, "Monome 64," Sound On Sound, September 2008, http://www.soundonsound.com/sos/sep08/articles/monome.htm.
61 Josh Boughey, "Monome," Tape Op, no. 62, November/December 2007, 83.
Some of those who are interested in the device may be able to afford this and appreciate the environmentally conscious production process, but the price range of these devices greatly narrows the Monome's marketability.

Software Features: Monome

As mentioned before, many of the applications built for use with the Monome are programmed in Max/MSP, an object-based graphical programming environment. Each object contains a small piece of code which, when interconnected with other objects, can be used to create programs for the Monome or for other devices and purposes. While this software is expensive, a free version exists that can run the applications, but not edit them. Because of the way Max/MSP is organized, the program offers a great amount of flexibility in the applications it can create for the Monome. The Monome's functionality lies mainly in the software, which provides the user with a variety of ways to generate or process sound. The two-value (on or off) buttons provide a suitable surface for triggering samples or playback positions, but lack the velocity values necessary for a convincing performance when playing chords and melodies. The Eigenharp, however, has more expressive gestural inputs, allowing for greater genuineness when playing these types of instruments, while still allowing for triggering points within the software.

Eigenharp

The Eigenharp Alpha is a recently introduced instrument invented by the UK-based company Eigenlabs, founded by John Lambert. The Pico, their smallest instrument, the Tau, their mid-range model, and the Alpha, the company's flagship, make up Eigenlabs' product line. Because the Tau and Pico are comparatively more limited in their abilities, this thesis will focus on the Alpha in order to examine the complete feature set provided by the software. The Alpha features an upper keyboard of five rows, or courses, of twenty-four keys each, for a total of 120 keys.

62 Ibid.
Each of these keys senses vertical pressure as well as tilt position in four directions.63 The lower keyboard houses twelve percussion keys in a single course, built to withstand the more powerful "thump" of percussive playing. These keys also rock back and forth, giving an extra mode of expression to percussive playing. Finally, about an inch below this row of keys is a small "mute" key, which mutes key presses on the Eigenharp rather than muting the computer's output. The keys are sampled 2,000 times per second, giving precise position among the 1,024 available values.64 The breath pipe also has a high sample rate, but nearly twice the range of sensed values.65 Many reviews of the instrument spoke well of the controller's high resolution, suggesting that it increases the genuineness of the resulting sound.66 David, a commenter on Peter Kirn's Create Digital Music article, noted that the newness of the instrument means that most virtual instruments are still configured for the MIDI resolution of 128 values.67 It is likely, however, that as this instrument becomes more widely used, more software companies will begin updating their software instruments to accommodate this unique feature. Spectrasonics, the makers of Omnisphere, released an update in 2011 that better interfaced with the polyphonic aftertouch mode,68 allowing keys to be individually modulated and pitched on a per-key basis by sending each key on a separate MIDI channel, providing up to sixteen-note polyphony.

63 Eigenharp Alpha product page, Eigenlabs.
64 Ibid.
65 Ibid.
66 Robin Bigwood, "Eigenlabs Eigenharp Alpha," Sound On Sound, November 2009, http://www.soundonsound.com/sos/nov09/articles/eigenlabseigenharpalpha.htm.
67 Peter Kirn, "Hands On Eigenharp: Exploring an Innovative New Digital Instrument," Create Digital Music, June 25, 2010, http://createdigitalmusic.com/2010/06/hands-on-eigenharp-exploring-an-innovative-new-digital-instrument/.
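The per-key channel scheme just described, in which each sounding key is sent on its own MIDI channel so that pitch and modulation can be applied to that note alone, might be sketched as follows. This is an illustrative sketch, not Eigenlabs' or Spectrasonics' actual code; the class and names are hypothetical.

```python
# Hypothetical sketch of per-key channel rotation: each sounding key is
# given its own MIDI channel, so pitch bend and pressure on that channel
# affect only that note. Sixteen channels yields the sixteen-note
# polyphony mentioned above.

class ChannelRotator:
    def __init__(self):
        self.free = list(range(1, 17))   # MIDI channels 1-16, all free
        self.active = {}                 # note number -> assigned channel

    def note_on(self, note, velocity):
        ch = self.free.pop(0)            # give this key a private channel
        self.active[note] = ch
        return ("note_on", ch, note, velocity)

    def per_key_bend(self, note, bend):
        # Bend is sent only on this key's channel, leaving other
        # sounding notes untouched.
        return ("pitch_bend", self.active[note], bend)

    def note_off(self, note):
        ch = self.active.pop(note)
        self.free.append(ch)             # channel becomes available again
        return ("note_off", ch, note, 0)
```

A host receiving these messages applies the bend only to the voice on that channel, which is what allows each key to be modulated independently.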
Geert Bevin was mostly disappointed in the strip controllers (both of which are on the upper half of the instrument), claiming that their placement, together with the amount of pressure required to trigger them, makes them difficult to use while standing.69 The Alpha pivots around its center, so there is no counter-pressure on the lower half of the instrument, as both hands are on the upper half. Furthermore, he states that "there are dead areas at the top and bottom, and it's easy to slide too far, interrupting the expression…"70 He is also disappointed that the strips are not pressure sensitive, as that would offer one more method of control, allowing for even more flexibility.71

Each of these gestural inputs generates a large amount of data, which is handled by the Base Station. This unit, a small box that connects the Eigenharp to the computer, has a "high-speed wired connection, with error-correction and locking connectors."72 The unit prepares the high volume of data coming from the Eigenharp and routes it to EigenD, the Eigenharp's host software. This software is the sound engine of the Alpha, and hosts all of the virtual instruments. Robin Bigwood pointed out that the main method of customizing the Eigenharp's features is not the computer's QWERTY keyboard and mouse, as one would initially think.

68 James Lewin, "Spectrasonics Updates Unleas[h]es Eigenharp Power," Sonicstate.com, last modified February 21, 2011, http://www.sonicstate.com/news/2011/02/21/spectrasonics-updates-unleases-eigenharp-power/.
69 If playing seated, however, Bevin does note that the instrument seems to position the player naturally, as a cello would.
70 Geert Bevin, "Eigenharp Alpha review after three months," EigenZone.org, June 21, 2010, http://www.eigenzone.org/2010/06/21/eigenharp-alpha-review-after-three-months/.
71 Ibid.
72 Ibid.
Instead, a note-based programming language called Belcanto was designed to let the Eigenharp's keys "type" a musical sentence that programs the device.73 Understanding this language is not required to play the Eigenharp, but it is the only way to create and edit configurations.74 Bevin does mention a "workbench application," still in development by Eigenlabs at the time of his review,75 which allows the user to customize the Alpha in a modular programming environment similar to Max/MSP, using the mouse and keyboard rather than the Eigenharp's keys. Finally, while the Tau and Pico were designed to provide a less costly introduction to the instrument and its concepts, the Pico still costs in the neighborhood of $800 US. This places the main and most functional of the three, the Alpha, at a price tag of roughly $7,000 USD.

Summary

Reviewing the features of the Monome and the Eigenharp, we have seen the strengths of these two instruments. First, both provide visual feedback, giving the user the current status of the instrument. The software does display feedback on processes and functions, yet the lighted keys on either instrument are intended to serve as the primary means of feedback. These may be slightly difficult to understand at first, but as the player becomes more familiar with the instrument, the lighted patterns begin to make more sense, freeing the musician from constantly referencing the computer screen. Including visual feedback (lighted or otherwise) in a digital instrument would be a high priority when proposing a new design, but it would be difficult to modify existing hardware to allow for this kind of feedback.

73 Robin Bigwood, "Eigenlabs Eigenharp Alpha."
74 Ibid.
75 At the time of Bevin's review, the workbench application had not been released.
For more information, visit http://www.eigenlabs.com/wiki/2.0/Introduction_to_Workbench/.

Both the Eigenharp and the Monome support the ability to run multiple functions, switching between them as they run in the background.76 These applications range from loop sequencing, recording audio loops and the player's gestural movements, and drum sequencing on the Eigenharp, to the more generative algorithms and sample cutting on the Monome. Unlike acoustic instruments, the functions on a digital musical instrument can continue running without the direct influence of the user. Both systems allow the use of third-party audio plug-ins, expanding the user's choice of playable sounds for a performance. They also, in most cases, follow Wessel's guidelines of pitch mapping, placing similar sounds within proximity of one another on the instrument.77 However, the Monome's buttons are not velocity sensitive, so the size of a gesture will not always correspond to the size or importance of the resulting sound. But because both are physical interfaces, separated from the interface of the computer, they can serve to communicate the player's intent in the musical result.78

The features listed here serve as a set of guidelines from which one can design a software system for a digital musical instrument. The Eigenharp's ability to trigger loop playback, to record and toggle loops of the user's key actions and vocal loops, and to natively host virtual instruments, combined with its mode-switching function, allows the Eigenharp to perform interesting and complex textures. In addition, the Eigenharp's higher resolution affords the user more nuanced control over the expression of the digital samples.

76 In the case of the Monome, one has to run a special "Pages" application in order to redirect the Monome data from one application to another. More information can be found at http://docs.monome.org/doku.php?id=app:pages.
77 Wessel and Wright, "Problems and Prospects," 14.
78 O'Modhrain, "A Framework," 33.

The Monome is expressive in its own distinct way. The nature of the device has generated a plethora of music-making applications, increasing the overall adaptability of the device. The Monome, instead of natively hosting the software instruments, uses virtual MIDI devices to route information to a standalone instrument or other host software. The Monome also boasts non-traditional methods of creating music, allowing the musician to generate a different kind of texture in a familiar fashion.

System Proposal

Many of the features found within both the Eigenharp and the Monome systems would, when combined, create a platform on which to begin building a system for the creation of similar music. These features range from creating MIDI and audio loops to using the hardware to control a generative algorithm. While designed for their respective controllers, these attributes can easily be translated to other control surfaces, as much of the multi-featured functionality is made possible by the software. Most of the Monome applications simply have several points that need to be triggered. Even though the Eigenharp is a highly unusual controller, many of the processes it controls are simple ones: recording loops, choosing buffer banks, and controlling volume and playback. It is true that higher resolution and multiple methods of data input make for a more expressive system, and that feedback embedded in the controller provides a great deal of additional freedom; however, many of the music-making capabilities are not entirely exclusive to these particular characteristics. While it is true that the Eigenharp and the Monome are two unique, non-traditional controllers, they are both controllers. The essence of such a device is that, in its most basic form, it performs the same functions as a computer's keyboard and mouse.
Instead of having to move the cursor to click a point within the software, the user simply presses the button associated with that point, and the task is executed. It is thus only a matter of re-mapping control of the software from one hardware device to another. The author proposes that if a program is already mapped to the controls of one device, it can be assigned again to work with another controller. Because of the nature of many of the Monome applications and of EigenD, and because these are two separate software programs, the proposed system will be designed from the ground up.

The type of controller specifically chosen for this digital music system is the more traditional piano-style keyboard controller. This method of controlling music is familiar to many musicians, and, given the availability and pricing of these controllers, more users will be able to quickly understand and gain control of this music system. Preferable gestural features would include velocity-sensitive keys, aftertouch, pitch-bend and modulation wheels, faders, rotary encoders, and drum pads. These controls would still be familiar to many users, and would allow for an efficient mapping scheme. A simple Internet search will turn up a variety of MIDI controllers for well under $200; companies like Akai and Korg manufacture small, travel-sized controllers with these controls, averaging around $55 each.

Gestural Control and Mapping

Gestural control is the primary means of generating data to manipulate parameters within the sound-generating software.
Simple interpretations of gestures, like pressing a key or moving a continuous controller, generate one data set, while more complicated interpretations, such as the distance or interval between keys (referred to by some as a "span" gesture), or the speed at which a continuous controller is moved (discerned from the time taken to move from one value to the next), can generate another data set that can be used to manipulate other parameters and software values.

To provide a level of customization, and to increase the level of compatibility, it would be beneficial to implement a MIDI-learn function. This would allow the user to click on a parameter in the software and then press the desired control, assigning that control to that parameter. Not only would this facilitate the setup of individual keyboards, but it would also allow users to program their setup to match their preferred method of control. The notion of creating a library of controllers, something akin to Novation's "Automap" software, was considered for inclusion in the design of this system, but upon further deliberation it was decided that, for this particular application, MIDI-learn would be more beneficial because it allows the user to create a custom mapping scheme that fits their performance preferences. These mapping schemes could then be saved, recalled, and edited as needed.

Application Control Methods

A music software application would need control not only over musical processes, but also over navigation of the interface. Selecting different instruments or modes of operation via the controller would be ideal for keeping the user engaged in the music. One possible method of control could conceivably multiply the number of points available on a single device.

[Figure 5 – Shift function diagram – Each color represents a new destination for controller data. Each destination refers to a key being pressed.]
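The MIDI-learn function described above reduces to a small routing table: the user arms a software parameter, and the next incoming control message is bound to it. The following is a minimal sketch under assumed names, not an implementation of any existing product.

```python
# Minimal MIDI-learn sketch (hypothetical names): arm a parameter, then
# bind it to whichever control the user touches next. The mapping is a
# plain dictionary, so it can be saved, recalled, and edited as needed.

class MidiLearn:
    def __init__(self):
        self.map = {}          # (status byte, controller number) -> parameter
        self.armed = None      # parameter waiting to be learned, if any

    def learn(self, parameter):
        self.armed = parameter               # user clicked a software parameter

    def handle(self, status, number, value):
        source = (status, number)
        if self.armed is not None:
            self.map[source] = self.armed    # bind this control to the parameter
            self.armed = None
        if source in self.map:
            return (self.map[source], value) # route the value to its parameter
        return None                          # unmapped control: ignore
```

For example, after `learn("filter_cutoff")`, the next control-change message, say on controller 74, binds that knob to the parameter, and every later message from it is routed there.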
By using function buttons (or even MIDI keys in the extreme low and high ranges) as "shift" keys, encoder information can be temporarily rerouted from one destination to another, changing the function of keys, faders, knobs, and other input controls. Adding a second shift key doubles the number of functions, allowing a total of four destinations for the data: each key independently re-routes the information to a separate destination, while pressing both simultaneously adds a third shifted destination. Using these shift functions, the rotary encoders could, at one shift level, be programmed to select a specific mode, or a specific item within that mode. Additionally, two rotary knobs, used in the style of an Etch-a-Sketch toy, can serve as left-right and up-down navigation for selecting recording banks or other processes.

Possible Applications and Compositional Methods

Much of the uniqueness of the Monome comes from the number of applications that allow unique methods of creating music. Some of these applications are generative, setting off a sequence of notes or sounds when a button is pressed, instead of a single note. Others produce an ambient texture by freezing and repeating incoming audio slices or MIDI notes. Still others are dynamic, allowing the user to insert their own loops and control their playback point. While the goal is not to recreate these applications exactly, they can serve as inspiration for the keyboard system. Below are several applications inspired by current Monome applications, current Eigenharp functions, or the author's work in Max/MSP.

Looper/Sequencer

A familiar and favorite method for building a song in a live situation, this application will be one of the primary functions within the software. It will exist as its own mode, but may also be incorporated into other modes.
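The two-key shift routing described under Application Control Methods above can be sketched as a lookup keyed on which shift keys are held. The destination names here are hypothetical, assuming the unshifted state is itself one of the four destinations.

```python
# Sketch of two-key "shift" routing (hypothetical layer names): the pair
# of shift-key states selects one of four destination layers, and
# incoming controller data is rerouted to whichever layer is active.

LAYERS = {
    (False, False): "play",      # no shift: normal playing destination
    (True,  False): "mixer",     # shift 1 alone
    (False, True):  "loops",     # shift 2 alone
    (True,  True):  "effects",   # both shifts: the added third re-route
}

class ShiftRouter:
    def __init__(self):
        self.shift1 = False
        self.shift2 = False

    def set_shift(self, key, held):
        if key == 1:
            self.shift1 = held
        else:
            self.shift2 = held

    def route(self, control, value):
        # The same physical control lands in a different destination
        # depending on the shift state, quadrupling the control surface.
        return (LAYERS[(self.shift1, self.shift2)], control, value)
```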
This application will provide the user with the ability to layer multiple audio and MIDI loops of varying lengths. For MIDI loops, the user will also have the option to quantize their playing to the nearest 8th-, 16th-, or 32nd-note beat. The option to quantize to a specific scale pattern or key signature will not be made available, in an effort to promote more skillful playing. Because this is the case, a loop-deletion (or loop-cancel) feature will be implemented, allowing the user to erase mistakes or cancel the current loop at the press of a button. Additionally, a "pages" or "scenes" feature will be implemented in order to allow the musician to build different sections of a song. The user will be provided with a mix section, controlling the volume, panning, and solo status of each individual loop, and will be able to overdub multiple passes on one loop, as well as merge separate banks into one bank. In addition to the volume mixer, pressing the shift key would assign the piano keys to choosing loop buffers, the faders to controlling send levels to various effects, and the rotary encoders to a simple equalizer for each track.

Output Mixer

The output mixer will be similar to the mixer unit described in the looper application above. Each function within the application is assigned to a specific channel, with faders controlling the volume of independent channels and knobs controlling the pan settings of each channel. These levels and settings will be reflected in the software on the mixer page, as well as on the individual functions' pages.

Singing Bowls

For this application, the musician chooses the pitch of a bowl by pressing a key and spinning a knob. In spinning the knob, the system calculates an average speed by taking a sample of the time intervals between value changes, then averaging them to find the overall speed.
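The speed calculation just described, averaging the time intervals between successive knob value changes, might look like the following sketch. The names are illustrative, and timestamps are assumed to arrive in seconds.

```python
# Sketch of the Singing Bowls "spin speed" estimate: keep the timestamps
# of the last few knob value changes and average the intervals between
# them. A short average interval means a fast spin.

from collections import deque

class SpinSpeed:
    def __init__(self, window=8):
        self.times = deque(maxlen=window)   # recent change timestamps

    def change(self, timestamp):
        self.times.append(timestamp)

    def speed(self):
        # Average interval between changes; speed is its reciprocal
        # (value changes per second). Zero until two changes arrive.
        if len(self.times) < 2:
            return 0.0
        span = self.times[-1] - self.times[0]
        mean_interval = span / (len(self.times) - 1)
        return 1.0 / mean_interval
```

The resulting speed value could then drive the attack time or volume of the virtual bowl, as the following paragraph describes.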
This number can then be used to determine the attack time and/or the volume at which a "bowl" would "virtually" spin. The duration of the spin could be a factor of the speed, or the note could continue to spin until the user spins the knob in the opposite direction. It is important to note that spinning a control knob without pressing a key does not trigger a "spin" within the software; this allows the knob to be "reset" without disturbing the outgoing audio stream. Each bowl would also have its own volume fader within the function page, and would also feature a variation of the looper application. For this application, controllers with "endless" encoders would be ideal, but not necessary. Additionally, disc-jockey controllers such as the DJ-Tech CDJ101 and the Numark DJ2Go may provide a more intuitive feel for this particular application, but they would also increase the cost and would only be usable for this particular function.

Digital Chimes

This application (fig. 6) is also more ambient than melodic or rhythmic, and is designed to resemble the sound of wind chimes on a windy day. The core of the application is centered on a semi-random series of triggers. The speed of these triggers can be increased or decreased by a knob, simulating wind speed. The sound is either sent to a third-party audio plug-in, or can utilize built-in sound and noise generators. The pitches of the individual "chimes" are turned on and off by pressing the desired key on the keyboard, and once a certain number of pitches is reached, each subsequent pitch that is turned on causes the pitch that has been held the longest to be turned off. The user would also be able to set the amplitude envelope of the application, giving a kind of uniformity to the chimes.

[Figure 6 – Max patch that served as an inspiration for Digital Chimes – Created by the author.]
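The pitch pool behavior described for Digital Chimes, where turning on a pitch beyond a fixed limit silences the pitch held the longest, is essentially a first-in, first-out set. A minimal sketch, with the capacity chosen arbitrarily for illustration:

```python
# Sketch of the Digital Chimes pitch pool: keys toggle pitches on and
# off, and once the pool is full, each newly added pitch pushes out the
# one that has been held the longest (first in, first out).

class ChimePool:
    def __init__(self, capacity=6):
        self.capacity = capacity
        self.pitches = []        # active pitches, oldest first

    def toggle(self, pitch):
        if pitch in self.pitches:
            self.pitches.remove(pitch)       # key pressed again: turn off
        else:
            self.pitches.append(pitch)       # turn on
            if len(self.pitches) > self.capacity:
                self.pitches.pop(0)          # drop the longest-held pitch

    def active(self):
        return list(self.pitches)
```

The semi-random trigger stream would then choose only among the pitches currently in the pool, keeping the chimes in key with whatever the performer has selected.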
This could serve as an ambient background sound, generating tones in key with the composition, or generate a series of clicks and taps, adding an arrhythmic bed of percussive sounds.

Feedback

As was stressed in earlier sections, some form of visual feedback is a vital part of this particular system. Unfortunately, very few modern controllers have feedback built into them, and modifying one of them to include visual feedback would be an extensive, costly project. As a result, almost all of the feedback will need to take the form of an on-screen graphical user interface (GUI). Each visual element within the system will have a dynamic priority level, needing to be seen at certain times, yet reduced or hidden at others. Elements like the master output meters will have the highest priority, and need to be full size and visible at all times. The master meters of individual processes do not need to be full size at all times, but should remain visible and readable, so that the user knows whether an individual process is overloading its meters. When switching between processes, the interface should indicate a number of things. First, when the "mode key" is pressed, the display should strongly indicate that a "mode change" has been activated; changing the colors or size of the elements would assist in conveying this message. Second, indicating the currently selected interface with text, along with color and shading changes, would greatly reduce confusion as to where the user is within the interface. Apple's Exposé and Spaces are good examples of this particular feature. Finally, when a process is chosen, that process should become visually dominant, without distraction from other elements.

Summary

While the instrument is still incomplete, it is beginning to take shape. Additional applications of varying functions would add to the complexity of both the instrument system and
the resulting music. The applications listed show the keyboard's ability to control non-traditional processes. If these applications were placed inside a larger application, they would allow the performer to add unique sound textures to their musical performances. Additionally, these processes could be routed to control external processes, like lights or other visual effects, increasing the overall energy of a performance and more fully engaging the audience.

Closing Discussion

Throughout this paper, it has been shown that areas of the field of human-computer interaction such as affordances and constraints, gestural mapping, and feedback play a vital role in the design of a digital musical system. Though perhaps not explicitly thought of in this way during the design process, these concepts come in contact with every step of development. Furthermore, the discovery was made that while designing a system for musical performance, one must provide features and feedback which allow the musician to A) better understand the state of the instrument, and B) convey musical intent to an audience through their interaction with the interface. From this, we found that a greater understanding of these concepts helps one to evaluate current digital musical instruments, and provides a more informed process of design.

Next, the author evaluated the hardware and software features of both the Eigenharp and the Monome. While both of these systems are physically unique, and provide a non-traditional way of creating and performing music, it is primarily the software, not the hardware, that provides most of their functionality. The software defines the overall function of the system, while the hardware is designed in such a way as to facilitate control of the application.
From this, the author suggested that the methods of music creation implemented in the Eigenharp and the Monome systems, while unique, could easily be re-mapped to the controls of a standard MIDI keyboard controller. These controllers are more widely made, and are therefore more affordable and more available, allowing more musicians and performers to gain access to this particular style of music creation. Additionally, the idea of creating customized applications for the MIDI keyboard controller will hopefully spread, producing new and innovative ways of generating sound in a performance environment. These applications could range from methods of controlling sound to means of algorithmic sample playback and lighting for theater and sound design.

Overall, a system such as the one proposed would combine features of both the Eigenharp and the Monome, and would place expanded usability behind the controls of a standard keyboard controller. The Monome, for example, while the resolution of its buttons is low, can create a wide and varied range of sounds. Additionally, the Monome provides alternative, non-traditional methods of control to the user, expanding creativity in a way not typically afforded by other controllers. The Eigenharp, on the other hand, provides masterful, even virtuosic control over native sounds, and allows the performer to play a melodic instrument, loop that selection, play another line on a different instrument, and continue building these layers to create an energetic, engaging performance. While not particularly new or groundbreaking, the proposed system combines features of two successful instruments, and allows other users to experience and create music in a similar manner, engaging their audiences without having to directly interface with a computer during a live performance on stage.

Bibliography

Apple, Inc.
“A Brief History of the Synthesizer.” Logic Express 9 Instruments [help file]. Last modified 2009. http://documentation.apple.com/en/logicexpress/instruments/index.html#chapter=A%26section=5%26tasks=true.

Bevin, Geert. “Eigenharp Alpha review after three months.” EigenZone.org, June 21, 2010. http://www.eigenzone.org/2010/06/21/eigenharp-alpha-review-after-three-months/.

Bigwood, Robin. “Eigenlabs Eigenharp Alpha.” Sound On Sound, November 2009. http://www.soundonsound.com/sos/nov09/articles/eigenlabseigenharpalpha.htm.

Bongers, Bert. “Electronic Musical Instruments: Experiences of a New Luthier.” Leonardo Music Journal 17 (2007): 9–16.

Boughey, Josh. “Monome.” Tape Op, no. 62 (November/December 2007): 83.

Costall, A. “Socializing Affordances.” Theory and Psychology 5, no. 4 (1995): 467–481. Quoted in Thor Magnusson, “Designing Constraints: Composing and Performing with Digital Musical Systems,” Computer Music Journal 34, no. 4 (2010): 63.

Eigenlabs. “Eigenharp Alpha” product page. Accessed February 15, 2012. http://www.eigenlabs.com/product/alpha/.

Gibson, J. J. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979. Quoted in Thor Magnusson, “Designing Constraints: Composing and Performing with Digital Musical Systems,” Computer Music Journal 34, no. 4 (2010): 62.

Jordà, Sergi. “FMOL: Toward User-Friendly, Sophisticated New Musical Instruments.” Computer Music Journal 26, no. 3 (2002): 23–39.

Kirn, Peter. “Hands On Eigenharp: Exploring an Innovative New Digital Instrument.” Create Digital Music, June 25, 2010. http://createdigitalmusic.com/2010/06/hands-on-eigenharp-exploring-an-innovative-new-digital-instrument/.

Lambert, John. “Eigenlabs.” John Henry Lambert. Last modified July 2009. http://johnhenrylambert.com/projects/eigenlabs.html.

Lewin, James. “Spectrasonics Updates Unleas[h]es Eigenharp Power.” Sonicstate.com. Last modified February 21, 2011. http://www.sonicstate.com/news/2011/02/21/spectrasonics-updates-unleases-eigenharp-power/.

Magnusson, Thor. “Designing Constraints: Composing and Performing with Digital Musical Systems.” Computer Music Journal 34, no. 4 (2010): 62–73.

Monome. “Orders.” monome.org. Accessed May 24, 2012. http://monome.org/order.

Moog, Robert. “Memories of Raymond Scott.” Raymond Scott Archives. Jeff E. Winner. Last modified 2012. http://raymondscott.com/#293/custom_plain.

Norman, D. A. The Design of Everyday Things. New York: Doubleday, 1990, 12–13. Quoted in Sergi Jordà, “FMOL: Toward User-Friendly, Sophisticated New Musical Instruments,” Computer Music Journal 26, no. 3 (2002): 27.

Norman, D. A. The Psychology of Everyday Things. New York: Basic Books, 1988. Quoted in Thor Magnusson, “Designing Constraints: Composing and Performing with Digital Musical Systems,” Computer Music Journal 34, no. 4 (2010): 63.

O’Modhrain, Sile. “A Framework for the Evaluation of Digital Musical Instruments.” Computer Music Journal 35, no. 1 (2011): 28–42.

Paine, Garth. “Towards Unified Design Guidelines for New Interfaces for Musical Expression.” Organised Sound 14, no. 2 (2009): 142–155.

Pernice, Kara. “About Don Norman – jnd.org.” Don Norman’s jnd.org website / human-centered design. Accessed May 16, 2012. http://jnd.org/about.html.

Rothwell, Nick. “Monome 64.” Sound On Sound, September 2008. http://www.soundonsound.com/sos/sep08/articles/monome.htm.

Schloss, W. A. “Using Contemporary Technology in Live Performance: The Dilemma of the Performer.” Journal of New Music Research 32, no. 3 (2003): 239–242. Quoted in Sile O’Modhrain, “A Framework for the Evaluation of Digital Musical Instruments,” Computer Music Journal 35, no. 1 (2011): 32.

Spears, Vlad. “Monome!” Vlad Spears – Pterodactyls, Blue and You. Last modified April 23, 2006. http://vladspears.com/2006/04/monome/.

Stoll, Christophe. “‘Our best protection is our openness.’” Precious Forever. Last modified July 9, 2008. http://precious-forever.com/2008/07/09/our-best-protection-is-our-openness/.

Vera, A. H., and H. A. Simon. “Situated Action: A Symbolic Interpretation.” Cognitive Science 17 (1993): 7–48. Quoted in Thor Magnusson, “Designing Constraints: Composing and Performing with Digital Musical Systems,” Computer Music Journal 34, no. 4 (2010): 63.

Weinberg, Gil, and Scott Driscoll. “Toward Robotic Musicianship.” Computer Music Journal 30, no. 4 (2006): 28–45.

Wessel, David, and Matthew Wright. “Problems and Prospects for Intimate Musical Control of Computers.” Computer Music Journal 26, no. 3 (2002): 11–22.

Yu, Tim. “Monome.” Cool Hunting. Last modified January 25, 2008. http://www.coolhunting.com/tech/monome.php.