Course

    Creating New Interfaces for Musical Expression

    Friday, 22 November

    14:15 - 18:00

    Room S424

    Advances in digital audio technologies have led to a situation where computers play a role in most music production and performance. Digital technologies offer unprecedented opportunities for the creation and manipulation of sound; however, the flexibility of these new technologies implies an often confusing array of choices for composers and performers. Some artists have met this challenge by using computers directly to create music, leading to an explosion of new musical forms. However, most would agree that the computer is not a musical instrument in the same sense as traditional instruments, and it is natural to ask how to 'play the computer' using interface technology appropriate for human brains and bodies. In 2001, we organized the first workshop on New Interfaces for Musical Expression (NIME) to attempt to answer this question by exploring connections with the better-established field of human-computer interaction. This course summarizes what has been learned at NIME, which has been held annually since that first workshop.

    We begin with an overview of the theory and practice of new musical interface design, asking what makes a good musical interface and whether there are any useful design principles or guidelines available. We will also discuss topics such as the mapping from human action to musical output, and control intimacy. Practical information about the tools for creating musical interfaces will be given, including an overview of sensors and microcontrollers, audio synthesis techniques, and communication protocols such as Open Sound Control (OSC) and MIDI. The remainder of the course will consist of several case studies representative of the major themes of the NIME conference, including augmented and sensor-based instruments, mobile and networked music, and NIME pedagogy.
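    To make the mapping and protocol topics concrete, the sketch below sends a sensor reading to a sound synthesis parameter over Open Sound Control. It is a minimal illustration rather than course material: it assumes the third-party python-osc library and a synthesis environment (such as Pd or SuperCollider) listening on port 57120, and the sensor value, OSC address, and mapping function are all hypothetical.

        from pythonosc.udp_client import SimpleUDPClient

        # Assumed setup: a synthesis engine (e.g. SuperCollider or Pd)
        # listening for OSC messages on localhost, port 57120.
        client = SimpleUDPClient("127.0.0.1", 57120)

        def map_sensor_to_pitch(raw, lo=0, hi=1023, pitch_lo=48, pitch_hi=84):
            """Linearly map a raw sensor reading (e.g. a 10-bit
            microcontroller ADC value) to a MIDI note number."""
            norm = (raw - lo) / (hi - lo)
            return pitch_lo + norm * (pitch_hi - pitch_lo)

        # Hypothetical sensor value, as might arrive from a
        # microcontroller over a serial port.
        raw_reading = 512
        pitch = map_sensor_to_pitch(raw_reading)

        # The OSC address "/instrument/pitch" is an assumption; it must
        # match whatever address the receiving synth patch expects.
        client.send_message("/instrument/pitch", pitch)

    This one-to-one linear mapping is only the simplest case; much of the NIME literature on mapping concerns richer many-to-many mappings between gesture and sound.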


    Level

    Beginner


    Intended Audience

    The course is designed to have a broad appeal to interaction designers, game designers, artists, and academic and industry researchers who have a general interest in interaction techniques for multi-modal musical expression.


    Prerequisites

    Attendees should be familiar with the basics of interactive media, but do not need any particular technical background. No background in music or computer audio is assumed.


    Presenter(s)

    Michael Lyons, Ritsumeikan University
    Sidney Fels, The University of British Columbia


    Dr. Michael Lyons has worked in computational neuroscience, pattern recognition, cognitive science, and interactive arts. He was a Research Fellow at Caltech (1992-93) and a Lecturer and Research Assistant Professor at the University of Southern California (1994-96). From 1996 to 2007 he was a Senior Research Scientist at the Advanced Telecommunications Research (ATR) International Labs in Kyoto, Japan. He joined the new College of Image Arts and Sciences at Ritsumeikan University as Full Professor in 2007. Michael co-founded the New Interfaces for Musical Expression conference.

    Dr. Sidney Fels has worked in HCI, neural networks, intelligent agents, and interactive arts for over ten years. He was a visiting researcher at the ATR Media Integration & Communications Research Laboratories (1996-97). His multimedia interactive artwork, the Iamascope, has been exhibited worldwide. Sid created Glove-TalkII, which maps hand gestures to speech. He was co-chair of Graphics Interface ’00. He leads the Human Communications Technology Laboratory and is Director of the Media and Graphics Interdisciplinary Centre. Sid co-founded the New Interfaces for Musical Expression conference.