Julian Vogels

Music Technologist

I design new kinds of instruments, physical and digital. And algorithms. And Apps.

  What does it mean to design a digital musical instrument?
Digital musical instruments (DMIs) are composed of two separate parts: the physical interface, where the interaction takes place, and the sound synthesis on the computer.

Designing one requires knowledge of a musician’s gestural repertoire, the thoughtful application of usability principles, and the implementation of sound synthesis and signal processing.

The two parts are brought together with the best possible mapping strategy, which is the most important part of any DMI design.

  Why is mapping so important?
Digital musical instruments brought us the freedom of choosing what we want our controller to sound like. The same device can sound like a bowed string, a wind instrument or a free reed.

But that brings a lot of questions with it. Isn’t it very difficult to learn a new instrument, if the musical gestures don’t intuitively correspond to the logic of the instrument?

The art of choosing the mapping between the sensor signals coming from the device and the parameters of the sound synthesis engine is a crucial part of turning a prototype into a successful instrument.

  Why embedded systems?
An embedded digital musical instrument is built in one piece, works stand-alone and is usually portable.

The sensor data acquisition, the processing of the sensor signals, the mapping and the sound synthesis are carried out in the same physical space.

This is especially interesting for musicians who perform on stage, as they avoid the long setup time they would have with a cabled laptop rig.
On the other hand, embedded systems are the first step to a marketable consumer product that one day leaves the lab and makes many people happy.


Have you always hated practicing with a metronome? Well, not anymore. Together with my colleagues from Soundbrenner I am developing the first wearable device for musicians – a metronome that you will actually love using.

Instead of that annoying click sound, the Soundbrenner Pulse vibrates to the beat, so you can train your sense of rhythm more naturally. Wear it with a strap or clip it anywhere with magnets. It connects to our iPhone and Android app, packed with tons of features, including many rhythm exercises. In the long run, we want to become the fitness tracker for music, motivating you and tracking your progress.

But we need your support to make it happen! Everything is sorted out, but we don’t have the funds yet to really manufacture it. Share our campaign and of course get one for yourself ;)

Indiegogo campaign 

More great news: I just officially completed my Master’s degree in Music Technology at McGill University in Montréal. The thesis deals with harmonica performance gestures and features an extensive patent review as well as a motion capture study.
I analysed the gestures of five expert harmonica players and drew conclusions for the design of electronic harmonica-type digital musical instruments.
Then, I built a prototype based on the awesome pre-existing device called Jamboxx.

Interested in working with me? Then just shoot me a message; I’ll gladly come over for a coffee.

Be sure to have a look at my Curriculum, and at my LinkedIn or Xing profiles.


Featured Projects

Hands-On Experience

Co-Founder and Head of Product at Soundbrenner Ltd., Berlin/Hong Kong/Shenzhen, 2014-09-15 to present

  • Research & Development
  • Product development at Chinese Design Studio Rone
  • Electrical Engineering for Rapid Prototyping
  • UX Design and software conceptualisation
  • Firmware programming

Graduate studies in Music Technology, McGill University Montréal, Canada, 2012-09-04 to 2014-12-15

  • MA Project “Harmonica-inspired digital musical instrument design based on an existing gestural performance repertoire”, including the development of an augmented musical controller
  • iPhone app development for Montréal Sound Map
  • Motion Capture Studies on Harmonica Performance Gestures
  • Implementation of software for comparing the effect of fractional delay methods on digital audio effects
  • Java application for the visualization of the jMIR ACE XML file contents (music information research)
  • Study on the boundary conditions of two dimensional Digital Waveguide Meshes
  • Conference manager of Time Forms: The temporalities of æsthetic experience
  • Design of the Filumis II, a mono cord instrument with complex mapping and a scanned synthesis engine
  • Design of the Lyrabox, an embedded gestural controller and sound synthesis box, running GPIO sensor acquisition, Libmapper and PureData on a RaspberryPi
  • CIRMMT Student and member of the Input Devices and Music Interaction Lab
  • McGill Graduate Excellence Fellowship Award

Undergraduate studies in Multimedia and Communication, University of Applied Sciences Ansbach, Germany, 2008-10-01 to 2012-06-22

  • Graduated with Honours
  • BA Thesis Project: Conception, construction and implementation of the interactive multimedia installation Maskenrad (Bachelor Project)
  • Development of an innovative musical instrument using Max/MSP, a microcontroller and the sensor data of a smartphone’s accelerometer
  • Design of game sound particularly for mobile devices
  • Project management for the in-university TV-Studio online presence (HTML5/AJAX/SOAP, Team: 9 persons)
  • Director of an in-house TV studio production (Team: 17 persons)
  • Programming of a client server application using Java Enterprise Edition (eCommerce)
  • Programming of an Android app
  • Recording of an EP from conceptional design to mastering
  • Composition of an electro track
  • Several short film projects, graphic and 3D design
  • Development of an online presence using the content management system TYPO3
  • Vice president of the young social democrats and elected representative to the board of social democrats in Ansbach-City
  • Improvement of French and Spanish skills
  • TOEFL test passed with a score of 108 out of 120

Internship, Hamburg, Germany, 2010-03-15 to 2010-09-29

  • Becoming acquainted with the architecture of a radio station
  • Video editing & motion graphics
  • Full-fledged member of an iPhone app development team
  • Presentation of work results to clients
  • Getting to know business customs

Civil Service/Stay abroad, Kpalimé, Togo, 2007-07-11 to 2008-08-18

Tutoring of children in a crèche and of disabled people on a farm, plus language teaching courses, as part of a voluntary social year (an alternative civilian service) with ICJA e.V. Volunteer Exchange Worldwide.

At school, Crailsheim, Germany, 2003 to 2007

  • Active voluntary member of the Ratskeller team in Crailsheim (youth work, organization of events)
  • Active voluntary member of Jugendzentrum Crailsheim e.V. (youth centre) in Crailsheim (referee for culture, public youth work)
  • Referee for culture at the SMV (pupils’ administration) of the Albert-Schweitzer-Gymnasium for 3 years
  • Member of two bands as drummer and bassist
  • Co-organizer of the Rothof openair concert 2006


May 2015 MA Music Technology Convocation
2012 – 2014 Graduate studies in Music Technology at McGill University, Montréal, Canada.
22 June 2012 Bachelor of Arts degree (with honours) at University of Applied Sciences Ansbach, graded 1.0
1 May 2010 Start of the internship at FUSE Integrierte Kommunikation und Neue Medien GmbH in Hamburg fuse.de
15 March 2010 Start of the internship at Aldebaran Marine Research & Broadcast in Hamburg aldebaran.org
2008 – 2012 University of Applied Sciences Ansbach. Course of studies “Multimedia and Communication” hs-ansbach.de
June 2007 Degree: Abitur (comparable to A-levels), average grade 2.1
1998 – 2007 Albert-Schweitzer-Gymnasium Crailsheim (high school)

Language knowledge

German mother tongue
French fluent
English fluent, TOEFL iBT score 108/120
Spanish advanced basics

Other Knowledge

  • Drummer
  • Bassist
  • Sport boat driving license (sea)

Software & Hardware Knowledge

Max/MSP, PureData, Objective C/C++ (advanced, preferred), Java SE/EE (intermediate), Pro Tools, Ableton Live, Logic, Native Instruments Komplete and Maschine, libmapper.
Qualisys Track Manager, ELAN, MATLAB.

Adobe Creative Suite 5 (mainly Photoshop, Illustrator, InDesign, After Effects), Final Cut Pro X (certified in version 6.5), HTML5/CSS3/Javascript/PHP, MySQL, TYPO3 (certified), WordPress, Processing, LaTeX, iWork/Office.
Arduino, RaspberryPi, BeagleBone, SolidWorks, 3D Printing, PCB etching & surface mounting, woodworking.


  • Certified TYPO3 Integrator (Open Source Enterprise Content Management System for dynamic web pages, www.typo3.org)
  • Apple Pro certified in Final Cut Pro 6.5
  • Scientific publication at the ICMI Workshop of Mensch & Computer 2011 in Chemnitz (see program at bit.ly/rrifMb)


  • Gestural Controller Design: 90%
  • Embedded Systems: 70%
  • General Electronics: 70%
  • PCB Design: 50%
  • C++ / Synthesis Toolkit: 60%
  • Objective C: 80%
  • Graphic Design: 60%
  • HTML + CSS: 100%

Band Aid for Bands

Welcome, musicians! I am at your service!


Are you looking for new ways to express yourself musically?
Are you interested in custom made electronic instruments?
Do you want to tweak your classical instrument with some electronic augmentation?
Does your band miss the special something on stage?

I can help you with that! I have plenty of good ideas ready for deployment, and with my experience in hardware and software design, we can bring your music to the next level!

In order to build the perfect new musical instrument for you, I need to get to know your personal way of playing and work closely with you and your preferences.
So what are you waiting for?

  My Services

  Gestural Controller Design

  Stage performance concepts

  Multimedia Shows

  Interactive Sound Installations

Soundbrenner Pulse Insight

The Soundbrenner Pulse (without band)


The Soundbrenner Pulse is a connected, wearable haptic metronome that vibrates to the beat instead of producing audible clicks. When I first heard about this idea, I was pretty excited, even if I had initial doubts that it would actually work.

Does it work, science?

Pianist using the Soundbrenner Pulse in practice

Because of my academic background in Music Technology and my experience as a drummer, I know how valuable the metronome is for music education and practice in general. Humans don’t have a very accurate built-in sense of rhythm – you need to earn it through practice. But once you have a good inner sense of rhythm, you can add rhythmic nuances to your performance, which is what distinguishes the professional musician from the enthusiast.

Unfortunately, practicing with a traditional metronome isn’t fun. The click track constantly distracts you from what you are playing, which is why a vibrational metronome with accurate haptic feedback makes sense.

Science tells us that auditory stimuli (clicks) are perceived very quickly, within a fraction of a millisecond, whereas haptic stimuli (touch or vibration) are perceived with a delay of a few milliseconds (about 5 or so).

Of course, that raised my eyebrow. I doubted that you could really play on the beat with only vibrations as a cue. Turns out I was wrong – it works amazingly well. A few milliseconds’ difference isn’t enough to really notice, and the benefits far outweigh the slower perception.

After building and testing the early prototype, I can say that this metronome is exactly what you want as a musician. We don’t need the audible click in order to play on the beat: it just makes you feel like a robot. The haptic cue gives you a more subtle guideline and feels more natural, as if someone were tapping you on the shoulder.

It makes sense, if you think about it: when making music, we often tap our foot or move our body a bit. The feeling of the vibration integrates into that body feeling. That’s why you don’t need to focus as much as you would have to with an audible click. Focus on your music instead.

We need your support to make this new metronome a reality. If you want to be among the first to try it out, pre-order your Soundbrenner Pulse now on Indiegogo :)

The magic of connecting things

I am excited about what we can build into our app in the future. There are so many ways to make a musician’s rhythm practice better.

A rhythm editor will make it easy to build a complex rhythm pattern. You just drag and drop subdivisions such as quarter notes, eighth notes, sixteenths or triplets, and the Soundbrenner Pulse will adjust its rhythm for you.

There will be straightforward rhythm exercises with which you can practice holding your rhythm over gaps, accelerando and decelerando, complex rhythm patterns and more!

The Soundbrenner Pulse (without band)

A lot of musicians practice and record with music software like Ableton Live, Logic Pro or Pro Tools. Those DAWs can send click track information out through the MIDI or OSC protocols. If we are backed by enough people on crowdfunding, we will develop a utility app which receives the MIDI or OSC data and sends it to the Soundbrenner Pulse. It even works the other way round: you can then use the touch button and the wheel to fire tracks in Live or change the wetness of your delay effect. Unlimited possibilities.

If we are backed by even more people on Indiegogo, we plan to implement a feature called Performance Feedback. The phone listens to your practice, and since we know what rhythm you want to play, a beat tracking algorithm can figure out how close you were. That way, you can use the rhythm exercises to track your progress over time. Imagine you could see on your phone how much better you are now than just a few weeks ago. You’d have a fitness tracker for music that motivates you and reminds you to practice your instrument. The future has arrived. That’s the way to go if you want to become a great musician!

We need your support to build all this and revolutionize the way you learn to play! Donate and pre-order your Soundbrenner Pulse now on Indiegogo :)

  Posted by julian.vogels on April 9, 2015

How to work efficiently with a Chinese product design studio



Chinese design studios are ridiculously fast when it comes to transforming your prototype into a manufacturing-ready (DFM) prototype, which you can then hand to a factory for mass production. Depending on your product, you can count on only 2-3 months of development time.


However, I strongly recommend being on site for the whole duration. As a German, I really needed to adjust to the different working environment and attitude. Chinese designers are lightning fast when they fly over their keyboards to adjust a design, but they generally don’t think for you. You have to conceptualize, plan, and communicate, and they execute quickly and to a high standard. You don’t have to micromanage everything, but you should thoroughly check the work results to be sure they didn’t misunderstand you.


Here are a few things to consider when you’re working with a Chinese product design studio:

  1. Be well prepared. When you start working with a design studio, you don’t need to have a final working prototype with all the electrical components figured out (it will change anyway), but you need a profound understanding of your market and the product’s user experience flow. You just need to know all the use cases. The design studio won’t do research for you: They need decisions, and they need them fast. For example, did you already think about the necessary battery life your product requires? Materials used?
  2. Speak Chinese or have someone you trust translate for you. You’re in the better negotiating position if you speak Chinese yourself, of course. Most designers you will be working with only speak Chinese.
  3. The work result is only as good as the instructions. Precision of language is important. On a side note: use WeChat, not email. They don’t check email regularly.
  4. Ask one question at a time; don’t send lists of questions. Ask the next question once you have your answer. That’s how it works best; it must be a cultural thing.
  5. Spot mistakes when looking at work results. To do that, you need to know your shit and stay up to date with the development process at all times. For example, I already had a case where the mould for a motor was 4 mm too short. If I hadn’t spotted this, we could have thrown the mockup in the trash. It was short for no reason at all; someone had simply assumed a size.
  6. Iterations. Chinese design studios’ development is fast and sequential: they do industrial design, then mechanical design, then electrical engineering. In reality, however, you cannot know all the parameters of ID before EE has even started. How could you decide on a casing if you don’t know the battery size yet? So if you want quality, prepare your project manager for the fact that your product won’t be the average Chinese product, but will require iterations of redesigns and testing to ensure quality.
  7. Delegate. Chinese design studios, especially in Shenzhen I guess, have an incredible know-how. They develop products here, that’s what they do. So don’t forget to listen to them, and don’t try to do everything by yourself. You’re the expert in your field, but for all general aspects of products (materials, wall thickness, Bluetooth connections, etc.), they know better.
  8. Don’t overburden. They may work fast, but they will still get angry if you continue asking for changes to the design.

Source: I developed the first wearable device for musicians, the Soundbrenner Pulse, at the design studio Rone in Shenzhen, China, from December 2014 to the beginning of February 2015.

  Posted by julian.vogels on January 11, 2015

Exciting times! From Soundbrenner co-founding to rapid prototyping in Hong Kong and Shenzhen

Hong Kong Skyline (from Victoria Peak)


Since I moved back to Germany in July, many things have happened in a short time. My wife and I found a nice place in Berlin-Friedrichshain, got to know the city, and looked for work. After stumbling upon the enormous MeetUp community of Berlin, I went to a couple of startup meetups. These are weird gatherings of people who are not actually interested in one another, but are just trying to get themselves into a company or hire the best employee. But then I met the guys from Soundbrenner, who proved to be very nice and interesting people with a great idea: designing the first wearable device for musicians.

Then I became a co-founder. With my background in Music Technology from McGill University, I was well suited for the job, and I still very much enjoy working on our product. The initial idea transformed into a small company that won a bunch of awards for its excellent pitches (Berlin Social Media Week, Start Tel Aviv, Startup Weekend and more), which brought us to Hong Kong, where we became part of the BRINC Connected & Smart Device Incubator program as one of its very first early-stage startups.

The details of the product remain under wraps until our crowdfunding starts on 31 March 2015.

Hong Kong is a great place for building up a company, with tremendous opportunities. In this area you find everything you need: Hong Kong for banks, legal matters, and race horses; Shenzhen (just across the border in mainland China) is the world capital of consumer electronics prototyping and manufacturing; and Guangzhou is a great place for figuring out distribution.

Our prototypes evolved, and after numerous product testing sessions with musicians we learned a great deal about how musicians can improve their daily practice routine with our product. The design for manufacturability (DFM) is roughly finished by now, and we can say with confidence that we will be able to produce this product for the masses soon. It’s exciting to see an idea become reality.

If you want to follow our progress, just head over to soundbrenner.com and subscribe to our newsletter!

  Posted by julian.vogels on November 29, 2014

Relayr WunderBar Hackathon 2014: 2nd prize winner




My second hackathon here in Berlin was absolutely cool. The location on the 4th floor of Betahaus fit very well, as it was small and cozy, with far fewer people than at the Berlin Music Hack Day. That’s why it was so easy to get to know so many interesting people, including the relayr team, who made the whole thing possible.

Relayr will make the WunderBar publicly available in late October, but we had the opportunity to get our hands on it early. The WunderBar can be broken into seven parts: one master module that connects via WiFi to the relayr cloud, and six sensor modules, which transmit their values to the master module (or elsewhere) via Bluetooth LE. The data can be monitored online or used with SDKs for web apps, iOS or Android.

When I arrived at Betahaus, I didn’t have a concrete idea yet, or a team. I just told the crowd that I’d like to bring music to the Internet of Things using the WunderBar, and called the project WunderSound.
Khaled, who had just come over from Egypt and is new in Berlin like me, joined the WunderSound team, as did my Feel The Beat colleague Jose. We brainstormed for a couple of hours first. It wasn’t easy at the beginning because we had such different hacking approaches. Khaled develops on Windows and had a Kinect in mind for his hack, while I wanted to focus on iOS app development this time. Jose supported the team with his great industrial design skills in the idea-finding phase, but unfortunately had to work on an urgent project afterwards, until we met him again the next morning.

WunderBar technical info

The relayr WunderBar modules (graphic from relayr.io)

The WunderBar is a tool for app developers to get into the world of the Internet of Things (IoT), and for me to combine music and home automation. Thinking about where in my home there was already music, I immediately thought of the old 70’s washing machine I had in Montréal. The old machine was a bitch, because it always jumped around so that we had to run to the kitchen and sit on it until the spin cycle was over. At the same time, sitting on that machine, you could literally feel the crazy polyrhythm it made while spinning.
Using the accelerometer module of the WunderBar, you could measure the movement, turn it into a digital polyrhythm and jam with your washing machine! Wouldn’t that be awesome? So let’s get to work! Khaled’s skills were a good match, because the Kinect is already in so many living rooms and could be used as a musical instrument to jam with the machine.

The WunderSound hack comprises an iPhone application that connects to the relayr cloud to receive the streaming accelerometer data from the module, and a Kinect application, which uses an Arduino to send gesture recognition controls through the Bridge module of the WunderBar to the relayr cloud, so that the iPhone can ultimately map those controls to sound synthesis parameters.

Unfortunately, relayr had some problems with the onboarding of the high number of WunderBars for the hackathon, and some features were not yet implemented in the iOS SDK. That’s why the actual demo used accelerometer data from the iPhone accelerometer instead of a WunderBar module. But it is still all implemented. The problem lay in the connection of iOS to the PubNub cloud, which operates for some services in the back of the relayr cloud. That’s why we could bring the Kinect data up to the cloud, but then not down to the iPhone.

The result of our hack is still pretty cool, and won all of us the second prize – a PlayStation 4, yay!
PureData acts as the core for sound synthesis, sampling and sequence generation. It is wrapped in an iOS application with libpd. In Xcode, the acceleration data is processed: First, the values are integrated to yield the velocity of the washing machine. Then, the zero crossings are used as triggers for PureData. The triggers are time-quantized and aligned to a sampled beat.
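To illustrate the signal-processing idea, here is a minimal C++ sketch of the acceleration-to-trigger chain described above. It is only a sketch under my own assumptions: the real app was written in Objective-C around libpd, and the class name, leak factor and threshold logic here are made up for illustration.

```cpp
#include <cstddef>

// Illustrative sketch of the WunderSound trigger idea: acceleration samples
// are integrated to a velocity estimate, and each zero crossing of that
// velocity is reported as a trigger that could be forwarded to Pure Data.
class ZeroCrossingTrigger {
public:
    explicit ZeroCrossingTrigger(float sampleRate) : dt_(1.0f / sampleRate) {}

    // Feed one acceleration sample; returns true when the integrated
    // velocity crosses zero (i.e., a beat trigger should be fired).
    bool process(float acceleration) {
        // Leaky integration keeps the velocity estimate from drifting.
        velocity_ = 0.995f * velocity_ + acceleration * dt_;
        bool crossed = (velocity_ > 0.0f) != (lastVelocity_ > 0.0f);
        lastVelocity_ = velocity_;
        return crossed;
    }

private:
    float dt_;
    float velocity_ = 0.0f;
    float lastVelocity_ = 0.0f;
};
```

In the actual hack, each trigger would then be time-quantized and aligned to the sampled beat before being sent to the Pure Data patch.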

All in all, we hacked 24 hours straight and it was great fun! Big thanks to relayr!

WunderSound Xcode app: libpd and relayr cloud


  Posted by julian.vogels on September 28, 2014

Music Hack Day Berlin 2014



On September 5-6, 2014, Ben Bacon and I participated in the first official Music Hack Day of Berlin!

We partnered up with our awesome friends at ROLI to hack their magnificent new instrument: The Seaboard Grand. Needless to say, it was an amazing experience experimenting with different mappings and sound synthesis methods. You can learn a little more about the instrument by watching the short video below:

At MHD Berlin we came up with a few different input/output mappings. These were all largely inspired by the Seaboard’s ability to give the player continuous control over the instrument’s sound. Because the Seaboard’s silicone/rubber playing surface is highly malleable, we employed gestural metaphors (squeezing, pushing, and even bowing) to guide the intuitive process of our mapping strategies.

Mapping 1

The first mapping approach was the most experimental, and repurposed the keyboard to work more like a bowed string. Working in the visual programming environment Max/MSP, we hooked up the MIDI input (note value, velocity and note on/off), the aftertouch signal (finger pressure on the keys) and the 14-bit pitch bend (awesome!) to the physical modeling engine PerColate. Originally designed as the Synthesis Toolkit (STK) by Perry Cook and Gary Scavone, PerColate was ported to Max/MSP by Dan Trueman, and has since become one of the most popular physical-modeling engines around. PerColate gives the user the ability to directly manipulate the model’s physical parameters.

Given that we had a virtual universe of realistic and, uh, imaginative (blotar~ !) engines to choose from, we decided to go with the model of a bowed string. In this case, we wanted to make use of the expansive control surface of the ROLI Seaboard. While the general form of a piano keyboard exists within the contour of the performance space, the areas above and below the keyboard can be used to manipulate the sound of the instrument as well. We wanted to take advantage of this unique feature by developing a mapping that allowed the user to define a “string length” (i.e. pitch) by choosing a distance and pressing down on two points in the area below the keyboard.

The amount of pressure applied to the keyboard was mapped to the bow pressure parameter, while the average position on the Seaboard was mapped to the bow position parameter on the string. A note can be played by swiping one of the hands (bowing) in either direction on the flat surface after choosing a string length. The change in position influences the pitch slightly, along with the vibrato frequency. In addition, the exponential moving deviation of the position change is calculated (with help from the Digital Orchestra Toolbox), which serves as an excellent variable for mapping sound intensity. The faster you swipe, the louder the virtual string is bowed: this gives you the feeling of actually bowing the Seaboard.
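A minimal C++ sketch of such an exponential moving deviation follows; in the patch itself we used the dot.emd object from the Digital Orchestra Toolbox in Max/MSP, and the smoothing factor below is an arbitrary example value, not the one we used.

```cpp
#include <cmath>

// Exponential moving deviation: tracks how much the input is currently
// changing. Larger values mean faster position changes, i.e. louder bowing.
struct ExpMovingDeviation {
    float alpha;        // smoothing factor in (0, 1], e.g. 0.1f
    float mean = 0.0f;  // exponential moving average of the input
    float dev  = 0.0f;  // exponential moving average of |input - mean|

    explicit ExpMovingDeviation(float a) : alpha(a) {}

    float process(float x) {
        mean = alpha * x + (1.0f - alpha) * mean;
        dev  = alpha * std::fabs(x - mean) + (1.0f - alpha) * dev;
        return dev;  // map this to bowing intensity / loudness
    }
};
```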



Mapping 2

For the second approach we developed a more conventional mapping for a keyboard interface. By employing Max/MSP once more, we implemented the scansynth~ external developed by Jean-Michel Couturier. This amazing synthesizer creates sound from the continuous readings of an ever-changing wavetable, an approach called Scanned Synthesis. A virtual mass-spring-damper system is manipulated by forces that are controlled by the input parameters of the Seaboard. Overall, the system is represented as a warbling virtual string.

Finger pressure on the key was mapped to act as a force on the string. By dragging down on the key, the pitch is bent. The phase of the pitch bend acts as a shifting force, which when increased is perceived as a sharpening of the timbre. Furthermore, we implemented polyphony, and the synth is fully suitable for live performance. We christened it ScannedSea.
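For readers unfamiliar with scanned synthesis, here is a rough, self-contained C++ sketch of the underlying principle. This is not Couturier’s scansynth~ implementation, just the general idea: a slowly vibrating mass-spring string, excited by external forces, is periodically read out as a wavetable.

```cpp
#include <vector>
#include <cstddef>

// Minimal scanned-synthesis core: a ring of masses connected by springs is
// updated at a slow "haptic" rate and scanned at audio rate as a wavetable.
class ScannedString {
public:
    explicit ScannedString(std::size_t n) : pos_(n, 0.0f), vel_(n, 0.0f) {}

    // Apply an external force (e.g., key pressure) at one mass.
    void applyForce(std::size_t i, float f) { if (i < vel_.size()) vel_[i] += f; }

    // One update step of the mass-spring-damper system.
    void step(float stiffness = 0.1f, float damping = 0.999f) {
        const std::size_t n = pos_.size();
        for (std::size_t i = 0; i < n; ++i) {
            float left  = pos_[(i + n - 1) % n];
            float right = pos_[(i + 1) % n];
            vel_[i] += stiffness * (left + right - 2.0f * pos_[i]);
            vel_[i] *= damping;
        }
        for (std::size_t i = 0; i < n; ++i) pos_[i] += vel_[i];
    }

    // Read the current string shape as a wavetable sample (phase in [0, 1)).
    float scan(float phase) const {
        return pos_[static_cast<std::size_t>(phase * pos_.size()) % pos_.size()];
    }

private:
    std::vector<float> pos_, vel_;
};
```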


Mapping 3

The final mapping developed for MHD Berlin using the ROLI was an interactive microtonal patch inspired by the gelatinous form of the ROLI keyboard. Envisioning the Seaboard as just a MIDI keyboard is surely an understatement. Yet MIDI is somewhat of a rigid protocol. Developed long, long ago (sorry, people born before the 80’s!) in a galaxy far away, MIDI at its core adheres to the 12-tone Western scale, despite the presence of the pitch bend wheel.

The ROLI demands a dynamic performance environment. Therefore, adhering strictly to the 12-tone scale does not suffice! To allow notes, scales, and tonalities of all frequencies to be considered equally, a patch was developed using the Native Instruments Massive synthesizer. In this mapping scheme, the user has the ability to “detune” the playing surface. By pressing the “R” atop the Seaboard, and then swiping up the length of the entire keyboard with varying pressure, each key is detuned up or down. The resultant scale provides the performer with a customized microtonal environment in which to play.

The challenges of DMI Mapping

Input/output instrument mapping lies at the crux of what makes an instrument successful. What do people hear when a specific action is performed? Are rhythm and timbre integrated with or separated from gestural movement? Is there a disconnect between what we see and what we hear? These questions have often proven quite difficult when a new interface is presented. The ROLI is a great example of, as Perry Cook would say, “leveraging expert technique.” By designing around the familiar shape of a piano keyboard, millions of performers are instantly aware of the Seaboard’s performance capabilities. The sensitive force sensors under the rubber/silicone padding offer just enough changes to retain the familiarity of the piano, while instantly tapping into continuous control capabilities through the flexible playing surface.

Both of us can say with certainty that this instrument was a blast to work with, and hope to encounter one in our playing and programming careers soon!

Patches of our work from MHD Berlin can be found below, as well as powerful mapping tools from the Input Devices and Music Interaction Laboratory at McGill University.

ScannedSea from Julian Vogels on Vimeo.

Download patches

Scanned Synthesis Original Paper (Bill Verplank & Max Mathews)

scansynth~ by Jean-Michel Couturier

Physical Modelling with Synthesis Toolkit (STK) & PerColate

dot.emd (exponential moving deviation) Max object by Digital Orchestra Toolbox (IDMIL)

libmapper (OSC network mapping)

This text was written collaboratively with Benjamin Bacon.

  Posted by julian.vogels on September 6, 2014

Extending PWM output pins with a Texas Instruments TLC5940 LED driver


Are you short on available PWM outputs on your Arduino or other microcontroller? Read through this tutorial, get a TLC5940 and you can extend your controller by any number of signals!
I wrote this tutorial originally for sensorwiki.org.


This tutorial was originally written for sensorwiki.org, a fantastic resource for the design of gestural controllers that is supported by IDMIL.


Microcontrollers like the Arduino were designed to facilitate the use of electronics for designers and DIY enthusiasts. The platform provides a great starting point for a variety of electronic circuit designs. However, as the microcontroller is standardized, it is also limited in its use. That shows, for example, in the limited number of PWM (pulse width modulation) enabled output pins.

What can you do to extend the PWM capabilities of your Arduino? Just buy a bigger one? That is no longer necessary once you have read this article. It shows how to connect an Arduino microcontroller to a Texas Instruments TLC5940 LED driver in order to drive a large number of LEDs, or even power-hungry devices such as star-mounted high-power RGB LEDs or servo motors.

In the design of digital musical instruments (DMIs), this is particularly useful for providing different kinds of feedback to the performer while maintaining high extensibility at a lower cost.

Disclaimer: This information and the circuits are provided as is without any express or implied warranties. While every effort has been taken to ensure the accuracy of the information contained in this text, the authors/maintainers/contributors assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.


The datasheet of the TLC5940 is available from Texas Instruments, amongst other useful information such as application notes and the option to request samples.

A selection of important features:

Number of channels: 16

Resolution: 12 bit (4096 steps)

Drive Capability: 0 mA to 120 mA (for VCC > 3.6 V)

Connectable actuators

Many electrical components can be controlled using a PWM signal. Not only can LEDs be dimmed, but servo motors and DC motors can also be driven.

Daisy chaining

Daisy chaining means that you can wire multiple devices together in series. In our case, we can not only extend the PWM pins with one TLC5940 and its 16 outputs, but, thanks to the daisy-chain ability, even use multiple TLC5940s to output 32, 48 or 64 PWM signals.


The wiring of the TLC5940 will occupy 4 PWM pins on the Arduino for the serial communication to the chip. Depending on your Arduino, you will have to look up the corresponding pins that have to be connected. The tlc5940arduino Arduino library provides additional information (e.g., connecting an Arduino Mega). Please read the comments to each scheme, as there are a few mistakes in the shown pin setup.


Generally, the MOSI pin is connected to the TLC SIN, SCK to SCLK on the TLC, OC1A to XLAT, OC1B to BLANK and OC2B to GSCLK. In addition, the TLC’s DCPRG should be connected to VCC (rather than to GND) to disable the on-chip EEPROM dot-correction and enable the dot-correction from the DC-register in the device that can later be used with the Arduino library. The VPRG pin on the TLC can be connected to ground when the standard “greyscale” PWM register should be used, as opposed to connecting this pin to the Arduino’s digital pin 8 to use the dot-correction functions in the library. This pin is optional and you can leave it on GND for now. The TLC’s XERR pin can be used to check for thermal overloads if you connect it to digital pin 12 on the Arduino.

In order to daisy chain two or more TLC5940s, connect the SOUT of TLC 1 to the SIN of TLC 2, share the SCLK, XLAT, BLANK and GSCLK lines between them, and proceed in that manner for every additional TLC5940.

A 10k pull-up resistor connects the TLC’s BLANK pin to VCC. This is necessary in order to turn off all outputs while the Arduino resets, so that they do not "float" (outputting the voltage difference between two signals that are not ground-referenced – basically noise). It is only necessary to add this resistor to the first TLC5940 in the daisy chain, as the BLANK pins are connected.

The IREF pin of every TLC5940 has to be connected to GND through a resistor. The resistor value has to be calculated according to the output current that is suitable for your application. If you want to connect components that draw 20 mA of current (such as standard LEDs), use Ohm’s law to obtain the resistor value:

R = V / I
R = 39.06 V / 0.020 A
R ≈ 1953 Ω ≅ 2k

For those who want to know where the number 39.06 comes from: the output current of the TLC5940 is set by a current mirror, taking the reference current (determined by the resistor and an on-chip 1.24 V voltage reference) and multiplying it by a nominal gain of 31.5, so you get 1.24 x 31.5 ≅ 39.06!

Please study this breadboard layout for connecting 32 LEDs to your Arduino. Note that output pins 0 and 15 of the TLC5940 are on the opposite side from the other output pins.


Control circuit

An Arduino output pin is limited to 40 mA of current, and it should probably not be driven at that maximum. Overall, you should not draw more than 200 mA from the Arduino, as that is the limit of the processor chip package.

If you need to drive devices with high power consumption, you should design a control circuit and a work circuit. The control circuit, which is driven with a low current, tells the work circuit when to let current flow to your connected devices. This is accomplished through the use of transistors. For every output pin that you want to control separately, you’ll need a PNP transistor.

Note: Don’t pick an NPN transistor, as the TLC5940 is a constant-current sink and the current has to flow towards its output pins. A PNP transistor connects its collector and emitter pins when current is drawn from its base, as opposed to an NPN transistor, which switches when current is applied to the base pin.

Make sure that you get a PNP transistor that switches quickly and operates at the TLC5940’s output current of 20 mA. You can verify this by looking at the graphs in the transistor’s datasheet.
The work circuit is only connected to the Arduino through the transistors and operates at a higher current, such as 400 mA. If you connect a star-mounted high-power RGB LED such as this model from Vollong, you will need to connect a resistor between the emitter and the diode. The resistance value is calculated as follows:

R = (supply voltage – diode voltage) / (diode current)
R = (5 V – 2.5 V) / (0.4 A) = 6.25 Ohm

You should choose the standard resistor value nearest to that number.

Choose the power supply according to the wattage needed by the connected devices. You can calculate the power of each device by multiplying its voltage and current, and then sum the results.

TLC5940 Control Circuit

These simplified schematics show a single high power LED connected via a PNP transistor to the TLC5940.


Arduino Code

For the Arduino code, please refer to the well-documented tlc5940arduino Arduino library, written mostly by A. C. Leone. After putting the downloaded folder into the libraries folder inside the Arduino folder, the example files will be available under File > Examples in the Arduino IDE.

The example file BasicUse will guide you through the most important library features. Basically, the TLC has to be initialized in the setup() function (Tlc.init()). Then, you can set the value of each output pin, for example in a for loop, with Tlc.set(channel, value), where channel is 0 to 15 and value is 0 to 4095. Tlc.update() is then used to actually send the set values to the TLC5940, whereas Tlc.clear() sets all values to zero without sending them.
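As a rough sketch of that flow, modelled on the library’s BasicUse example (the wiring is assumed to be as described above, and the ramp step and delay are arbitrary example values), a minimal fade sketch might look like this:

```cpp
// Ramp all 16 channels of a single TLC5940 up repeatedly.
#include "Tlc5940.h"

void setup() {
  Tlc.init();                          // initialise the library (all outputs off)
}

void loop() {
  for (int value = 0; value < 4096; value += 64) {
    Tlc.clear();                       // set all channels to 0 (not sent yet)
    for (int channel = 0; channel < 16; channel++) {
      Tlc.set(channel, value);         // queue a grayscale value (0..4095)
    }
    Tlc.update();                      // send the queued values to the TLC5940
    delay(10);
  }
}
```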

An important thing to know is that if you want to use multiple TLC5940s, you have to set their quantity in the file "tlc_config.h" in the library’s folder. Open the file with your favorite text editor and replace the value of the constant NUM_TLCS with the number of TLC5940s you’re using. Save the file and restart the Arduino IDE.

Servo motors have to be controlled differently than common LEDs. Fortunately, the tlc5940arduino library provides a way of doing so without having to change much code. The example file Servos.ino explains how you should connect a servo motor and shows the use of custom library functions such as tlc_initServos().

Be aware that you cannot use LEDs and servo motors with the same TLC5940 (even when daisy-chained), as using the servo functions will drop the PWM frequency to 50 Hz (which will be clearly noticeable on the LEDs).


The circuit is useful for connecting any large number of actuators to your device. For example, you could give visual feedback on user interaction at different interaction locations. An array of individually PWM-controlled LEDs that can even be faded gradually can be built with the TLC5940.

An array of individually controllable servo motors could be used for many purposes, as servo motors are very accurate, quite fast at adjusting their angle, and versatile thanks to the available servo accessories such as horns and rods.

Additional Information

Sparkfun sells a TLC5940 Breakout board for a reasonably low price (currently $12.95 where the TLC5940 alone is at about $8).


This document provided an introduction to the TLC5940 LED driver, details on its capabilities and applications and practical information on its implementation and use with the Arduino library.

Fritzing files

Fritzing is an open-source software distribution for designing breadboard layouts and much more. You can download these zipped Fritzing .fzz files to get a better understanding of how to wire it up.

  TLC5940 Breadboard Layouts

The contained Fritzing file TLC5940controlcircuit.fzz requires the Adafruit Fritzing object library, as an exotic object is used (the high-power LED).

  Posted by julian.vogels on June 17, 2013  /  Tags: , , , , ,

Boundary Conditions in a 2D Digital Waveguide Mesh


Final course project for MUMT618 – "Computational Modeling of Musical Acoustics", instructed by Gary Scavone at McGill University, 2013.


Introduction

The 2D Digital Waveguide Mesh is an extension of the one-dimensional digital waveguide into 2D and was first described by Van Duyne and Smith (1993). In its original rectilinear form, the lattice is obtained by superimposing waveguides perpendicularly with a unit delay spacing. This results in a grid of 4-port scattering junctions connected by one-sample delays.

It has been proven that this method is a viable finite difference approximation to the two dimensional wave equation and therefore suitable for modeling membranes.

The 2D Digital Waveguide Mesh suffers from dispersion errors and doesn’t inherit the 1D waveguide’s computational efficiency as every node in the mesh has to be computed.

When exciting the system at one junction with a given input energy, the energy will spread throughout the system, building up a wave. In a constrained 2D Digital Waveguide Mesh, this wave will eventually hit the boundaries and either get reflected, absorbed, or partially reflected, according to the boundary conditions.
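For reference, here is a minimal C++ sketch of one interior time step of the rectilinear mesh, written in its equivalent finite-difference form (after Van Duyne and Smith 1993). It only illustrates the update rule; it is not the STK Mesh2D implementation, and boundary handling is left out (see the next section).

```cpp
#include <vector>
#include <cstddef>

using Grid = std::vector<std::vector<float>>;

// One interior update step of a rectilinear 2D digital waveguide mesh: the
// new junction pressure is half the sum of the four neighbours at the
// current step minus the junction's own value two steps ago.
void meshStep(const Grid& prev, const Grid& curr, Grid& next) {
    const std::size_t nx = curr.size(), ny = curr[0].size();
    for (std::size_t x = 1; x + 1 < nx; ++x) {
        for (std::size_t y = 1; y + 1 < ny; ++y) {
            next[x][y] = 0.5f * (curr[x + 1][y] + curr[x - 1][y] +
                                 curr[x][y + 1] + curr[x][y - 1])
                         - prev[x][y];
        }
    }
}
```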

Boundary Modelling

Typically, the boundary nodes of a rectangular 2D Digital Waveguide Mesh are terminated as a 1D digital waveguide. At the boundary, an interaction takes place between the terminating junction that represents the boundary and its immediate neighbour on the mesh itself. As you can see in the graphic, in a simple rectangular model of boundary conditions, corners are not connected and boundary nodes are perpendicular to their neighbouring nodes. Also, boundary nodes interact with only a single neighbouring node.

The basic boundary condition after Kelloniemi (2004):

where p represents pressure, subscript B denotes the border node, subscript 1 represents its perpendicular neighbour and r is the reflection coefficient.

Simple interaction can be modelled by setting the reflection coefficient: with r = 0, anechoic conditions at the boundary are simulated, whereas with r = 1, a phase-preserving total reflection without absorption is simulated. A phase-reversing reflection is modelled by setting r = -1 (Murphy and Mullen 2002). This, however, is an oversimplification and doesn’t model the real interaction between the wavefront and the boundary, as on a real instrument (a plate, sound bar, drum) the connected material and the shape of the membrane play a role.

In order to refine the interaction, a filter can be added to obtain a frequency-dependent reflection. This is the motivation for the present work.

Goals of this project

This project is dedicated to finding interesting approaches to the modelling of boundary filters and to implementing additions to the STK environment. Furthermore, the aim is to find a filter design that models a semantically more wooden sound, as opposed to the inherently metallic sound obtained with a total reflection of the wave.

Using a one pole filter

A single-pole filter has a pole at a real number on the z-plane, constrained to lie between -1 and 1 for stability. The position of the pole, which is used to model the boundary of the 2D Digital Waveguide Mesh, has a great effect on the resulting sound.
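To make the filter structure concrete, here is a minimal one-pole boundary filter sketch, similar in spirit to the STK OnePole class. The coefficient normalisation shown here is an illustrative choice and not necessarily identical to the STK implementation.

```cpp
#include <cmath>

// One-pole boundary filter: y[n] = b0 * x[n] + p * y[n-1].
// With p near 0 the structure degenerates to a plain gain (b0/1), i.e. a
// near-total reflection; with p near the unit circle the reflected wave is
// heavily smoothed and its energy strongly absorbed.
class OnePoleBoundary {
public:
    void setPole(float p) {             // p must lie in (-1, 1) for stability
        pole_ = p;
        b0_ = 1.0f - std::fabs(p);      // keep the overall gain roughly constant
    }
    float tick(float x) {
        y_ = b0_ * x + pole_ * y_;
        return y_;
    }
private:
    float pole_ = 0.0f, b0_ = 1.0f, y_ = 0.0f;
};
```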

Pole at 0.0001

If the pole is placed near zero (the center of  the unit circle), the filter structure approaches b0/1, essentially modelling a simple total reflection.

Pole at 0.9999

A pole near the unit circle results in a high absorption of the traveling waves.

Frequency response of a sequence of generated sounds of the waveguide mesh

This video shows the magnitude plot of a sequence of 20 sounds that were generated using the Synthesis Tool Kit Instrument ”2DMesh“, which by default already implements a OnePole Filter. The pole positions of the one pole boundary filters of the waveguide meshes change with every new energy initiation from -0.95 to 0.95, adding 0.05 to the pole position on the real axis with every iteration. You can verify the effect of the pole position on the frequency response of the filter by using this java applet on earlevel.com. Basically, if the pole position is below zero, the filter has a high-pass characteristic, whereas for values above zero, it turns into a low-pass filter.

Mesh2D OnePole pole position magnitude plot (video – download as .zip)

The following sequence of plots represents the 200th frame of a 2D Digital Waveguide Mesh simulation in Matlab®, where the pole position in the first picture is at 0.05 and is incremented by 0.10 for every following picture, ending with a pole position of 0.95.

Using a OneZero Filter

In order to alter the sound differently, a one-zero filter was implemented in the STK Mesh2D instrument. The one-zero filter simply sums the current sample with the past sample, therefore providing simple low-pass characteristics. After building the library, the zero position could be set using a function called setZero, as opposed to setPole for the OnePole version. The usage is basically the same. As the OneZero filter has low-pass characteristics, the results were similar to the OnePole implementation and yielded no interesting sounds. Try listening to this audio file:

Audio file

Using a Multiband FIR Filter

The acoustic sound absorption coefficients for wood can be found online. I tried to design a multiband FIR filter using these coefficients as a base, scaling them linearly in order to retain more of the signal’s amplitude. The resulting sound is pretty interesting, but also strongly attenuated. Apart from the high-frequency hiss and the short decay, in my opinion it sounds like a hard knock on wood. The filter was designed using the fdatool in Matlab.

Audio file

Using a Yule-Walker Filter

Another approach to achieving multiband filtering is to use a Yule-Walker filter (yulewalk in Matlab), which approximates the desired frequency response with the Yule-Walker autoregressive method by minimizing the forward-prediction error. This filter implementation yields no better results and introduces more high-frequency hiss.

Both this filter and the FIR filter above are of order twenty and require significant time to compute, but the Yule-Walker filter is by far the less computationally expensive of the two.

Audio file

Comparison of spectrograms

OnePole Boundary Filter

OneZero Boundary Filter

Multiband FIR Boundary Filter

Yule-Walker Boundary Filter

Varying the input position

By varying the input position of the initial impulse on the mesh, the resulting sound can be altered. In the plot below we can see that the smaller the distance between the input position and the output position of the mesh (where a node value is written to the audio file), the more energy the mesh contains. The energy is measured after a certain number of samples, corresponding to one quarter of the iteration time in the C++ implementation.

Two different changes in distance were tested. The input position was consecutively altered along a diagonal line across the square mesh (blue lines) and along a horizontal line, where the y-value was parallel to the output position at all times. When coming too close to the output position, the energy in the mesh becomes too high, so that the audio signal gets clipped. This can be adjusted with the amount of energy that is fed into the mesh via the noteOn function.

Diagonally changing input position

horizontally changing input position

The input position is changed in 0.05 steps according to this top view of the mesh; the output is taken from the bottom-right corner. In the horizontal case, the input position’s x-value is altered in 0.05 steps while its y-value stays the same.

Altering the output position

Varying the output position also has an effect on the spectral content of the generated sound. In the spectrogram plots below you can see that for different output positions, the spectral content especially in the high frequency range changes.

Diagonally changing output position

horizontally changing output position

The output position is changed according to this top view of the mesh in 0.05 steps. The input position is at the center of the mesh.


Different tests concerning the boundary filters of the 2D Digital Waveguide Mesh were shown. Four different filters were implemented. The input and output positions on the Waveguide Mesh were changed over time using the STK instrument Mesh2D.

By choosing an appropriate filter at the boundary of a 2D Digital Waveguide Mesh, the sound resulting from an excitation of the system can be altered in its amplitude and spectral content, corresponding to perceived loudness and timbre. Likewise, the change in input and output position has an influence on the frequency characteristics and amplitude of the sound.

However, the effect is not very interesting, as no filter configuration could be found that could result in a quite different sound, such as transforming the metallic timbre into a more wood-like timbre. I therefore conclude that the modelling of an appropriate sound body is more important to the alteration of the sound than a boundary filter as proposed by, for example,  Aird 2002 and Laird 2001.

Epilogue: Future improvements of the Mesh2D STK Instrument

The Mesh2D STK Instrument is a basic implementation of the 2D Digital Waveguide Mesh, giving the option to specify the x- and y-dimensions of a rectangular mesh, the decay factor and the input position on the mesh. The system can then be excited with a desired amplitude through a noteOn command.

Digital waveguide meshes can be far more complex than this. First of all, there are different mesh geometries, such as hexagonal or triangular node patterns; the latter shows less dispersion error and fewer computed nodes per space unit (Fontana 1995). The STK implementation could also be extended to model circular meshes, using rimguides to connect the mesh to a virtual boundary (Laird 1998). Laird then introduced the modelling of diffuse boundaries to simulate rough materials by randomly varying the incident traveling wave’s angle (1999). The accuracy of this modelling approach can be greatly improved through multidimensional deinterpolation and frequency warping, consequently almost eliminating the dispersion error at the expense of computational efficiency (Laird 2000).

These improvements refer to the use of the Mesh2D STK Instrument as a means of modelling actual instruments. However, the waveguide mesh technique can also be used to model the human vocal tract, or the acoustics of a room, especially in its extension into 3D in an either rectilinear or tetrahedral setting.

Matlab and C++ code

This archive contains both the Matlab code and the C++ code with instructions on how to use it with the STK. Please refer to the README file.


Aird, Marc-Laurent. 2002. Musical Instrument Modelling Using Digital Waveguides. Diss. PhD thesis, University of Bath, 2002.
Doctoral dissertation about the modelling of different instruments, including the construction of a drum model connecting 2D and 3D Waveguide Meshes.
Fontana, Federico, and Davide Rocchesso. 1995. A New Formulation of the 2D-Waveguide Mesh for Percussion Instruments. Paper read at Proceedings of XI Colloquium on Musical Informatics.
Introduction of the triangular mesh geometry, reducing the dispersion error.
Huopaniemi, Jyri, Lauri Savioja, and Matti Karjalainen. 1997. Modeling of reflections and air absorption in acoustical spaces a digital filter design approach. Paper read at IEEE Workshop on Applications of Signal Processing to Audio and Acoustics.
Digital filter modelling of reflection and air absorption for room acoustics.
Kelloniemi, Antti, Damian T. Murphy, Lauri Savioja, and V. Välimäki. 2004. Boundary Conditions in a Multi-Dimensional Digital Waveguide Mesh. Paper read at IEEE International Conference on Acoustics, Speech, and Signal Processing.
Modelling of artificial boundary conditions using a Taylor series approximation in an interpolated mesh.
Laird, Joel, Paul Masri, and Nishan Canagarajah. 1998. Efficient and Accurate Synthesis of Circular Membranes Using Digital Waveguides. Paper read at IEE Colloquium on Audio and Music Technology: The Challenge of Creative DSP.
Exploring several rimguide strategies for the modelling of circular membranes, suggesting the adjustment of phase delays.
Laird, Joel, Paul Masri, and Nishan Canagarajah. 1999. Modelling Diffusion at Boundary of a Digital Waveguide Mesh. Paper read at International Computer Music Conference, at Beijing, China.
An approach to modelling diffusion at the boundaries of a mesh with curved boundaries by randomly varying the angle of incident traveling waves, e.g., for modelling rough surfaces.
Laird, Joel. The physical modelling of drums using digital waveguides. Diss. University of Bristol, 2001.
Doctoral dissertation about the modelling of drums, involving very detailed models and connected 2D and 3D Waveguide Meshes.
Lee, Kyogu, and Julius O. Smith. 2004. Implementation of a Highly Diffusing 2-D Digital Waveguide Mesh with a Quadratic Residue Diffuser. Paper read at International Computer Music Conference, at Miami, Florida.
A more recent paper dealing with the modelling of diffusion at the boundary of a 2D Waveguide Mesh using quadratic residue sequences.
Murphy, Damian T., Antti Kelloniemi, Jack Mullen, and Simon Shelley. 2007. Acoustic Modeling Using the Digital Waveguide Mesh. IEEE Signal Processing Magazine.
Magazine article summarizing developments in the field of Digital Waveguide Meshes.
Murphy, Damian T., and Jack Mullen. 2002. Digital Waveguide Mesh Modelling of Room Acoustics: Improved Anechoic Boundaries. Paper read at Conference on Digital Audio Effects (DAFX-02), at Hamburg, Germany.
This paper gives an overview of known boundary types and describes a new approach to the specific case of an anechoic boundary.
Rossing, Thomas D. 1982. The Physics of Kettledrums. Scientific American, 172-78.
An introductory magazine article about the physics of kettle drums in particular, but also drums in general.
Savioja, Lauri. 1999. Modeling Techniques for Virtual Acoustics. Helsinki: Aalto University.
A research report with a 19 page chapter on Digital Waveguide Meshes.
Savioja, Lauri, and V. Välimäki. 2000. Reducing the Dispersion Error in the Digital Waveguide Mesh Using Interpolation and Frequency-Warping Techniques. IEEE Transactions on Speech and Audio Processing 8 (2).
An attempt to reduce the dispersion error of the Waveguide Mesh using a method that involves multidimensional interpolation, optimization of the point-spreading function, and frequency warping.
Smith, Julius O. 2004. Virtual Acoustic Musical Instruments: Review and Update. Journal of New Music Research 33 (3):283-304.
A journal article reviewing the developments in the field of virtual acoustics with a short but concise section about percussion modelling, referencing many important publications.
Strikwerda, J. C. 2004. Finite Difference Schemes and Partial Differential Equations. Philadelphia: Society for Industrial and Applied Mathematics (SIAM).
Basic theory of finite difference schemes and an explanation of the Von Neumann Analysis used to determine the dispersion error of a 2D Digital Waveguide Mesh geometry.
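For reference, the von Neumann analysis of the rectilinear mesh yields the following dispersion relation. This is a standard textbook result stated here in the usual FDTD notation (sampling interval T, grid spacing X, spatial frequencies ξx, ξy), not a formula taken from any single entry above:

    \sin^2\!\left(\frac{\omega T}{2}\right) = \frac{1}{2}\left[\sin^2\!\left(\frac{\xi_x X}{2}\right) + \sin^2\!\left(\frac{\xi_y X}{2}\right)\right]

The direction dependence of this relation is exactly the dispersion error that the interpolation and frequency-warping techniques mentioned above try to reduce.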
Van Duyne, Scott A., and Julius O. Smith. 1993. Physical Modeling with the 2-D Digital Waveguide Mesh. Paper read at International Computer Music Conference.
The original paper, introducing the rectilinear 2D Digital Waveguide Mesh.
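A minimal Python sketch of the scheme, written in the finite-difference form that Van Duyne and Smith show to be equivalent to the scattering-junction formulation (grid size, excitation and pickup point are arbitrary illustration values):

    import numpy as np

    # Rectilinear 2D digital waveguide mesh in its equivalent leapfrog
    # finite-difference form (Courant number 1/sqrt(2)).
    NX, NY, STEPS = 32, 32, 200
    u_prev = np.zeros((NX, NY))     # mesh state at time step n-1
    u_curr = np.zeros((NX, NY))     # mesh state at time step n
    u_curr[NX // 2, NY // 2] = 1.0  # impulse excitation at the centre junction

    for _ in range(STEPS):
        u_next = np.zeros_like(u_curr)
        # interior junctions: half the sum of the four neighbours minus the past value
        u_next[1:-1, 1:-1] = 0.5 * (u_curr[2:, 1:-1] + u_curr[:-2, 1:-1]
                                    + u_curr[1:-1, 2:] + u_curr[1:-1, :-2]) - u_prev[1:-1, 1:-1]
        # edges stay at zero, i.e. simple phase-inverting ("clamped") boundaries
        u_prev, u_curr = u_curr, u_next

    pickup = u_curr[NX // 4, NY // 4]  # read the mesh at an arbitrary pickup junction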
Van Duyne, Scott A., and Julius O. Smith. 1995. The Tetrahedral Digital Waveguide Mesh. Paper read at IEEE Workshop on Applications of Signal Processing to Audio and Acoustics.
The extension of the 2D Digital Waveguide Mesh into 3D using a tetrahedral approach with four-port scattering junctions for better computational efficiency.

Suggested readings

Vocal Tract Modeling
Mullen, Jack. 2006. Physical Modelling of the Vocal Tract with the 2D Digital Waveguide Mesh. Department of Electronics, University of York, United Kingdom.
Mullen, Jack, David M. Howard, and Damian T. Murphy. 2004. Acoustical Simulations of the Human Vocal Tract Using the 1D and 2D Digital Waveguide Software Model. Paper read at Conference on Digital Audio Effects (DAFX’04), at Naples, Italy.
Fitting a 2D Digital Waveguide Mesh into an arbitrary geometrical shape
Lee, Jung Suk, Gary P. Scavone, Philippe Depalle, and Moonseok Kim. 2011. Conformal Method for the Rectilinear Digital Waveguide Mesh. Paper read at IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, at New Paltz, NY.
Room Acoustics
Murphy, Damian T., and Mark Beeson. 2007. The KW-Boundary Hybrid Digital Waveguide Mesh for Room Acoustics Applications. IEEE Transactions on Audio, Speech, and Language Processing 15 (2):552-64.
Murphy, Damian T., and David M. Howard. 2000. 2-D Digital Waveguide Mesh Topologies in Room Acoustics Modeling. Paper read at Conference on Digital Audio Effects (DAFX-00), at Verona, Italy.
  Posted by julian.vogels on April 10, 2013

General Formats in MIR research: ACE XML

For Ichiro Fujinaga’s Music Technology class “MUMT 621 – Music Information Acquisition, Preservation, and Retrieval” I prepared this Keynote presentation about general file formats in music information retrieval (MIR) research, focussing mainly on the Weka ARFF file format and the ACE XML file format, comparing them and explaining the file structure. You can download it in .key format or as a PDF file (without animations).



Annotated Bibliography

McKay, C. 2013a. ACE XML. http://jmir.sourceforge.net/index_ACE_XML.html (accessed on 24 March 2013)
The main web resource for information about ACE XML: the official code base and documentation of the jMIR project on SourceForge, maintained by Cory McKay. Not only the specifications and current development are found there, but also an introduction to the background of ACE XML, links to publications on the topic, and the jMIR software, which comprises comprehensive JavaDoc documentation with example files.
McKay, C. 2010. Automatic music classification with jMIR. PhD diss. McGill University, Canada. 369-441.
Cory McKay’s PhD dissertation is a highly comprehensive document about jMIR. Over 70 pages are dedicated to ACE XML, and although some information overlaps with the homepage, this document is more complete.
McKay, C. 2013b. ACE XML 2.0 Specification. http://www.music.mcgill.ca/~cmckay/NEMA/ACE_XML_Dev_Page.html (accessed 21 March 2013)
The specification gives details about the file structure of ACE XML as well as on the use of the different file types.
Frauenstein, A., and Reutemann, P. 2013. Weka – Arff (Stable Version). http://weka.wikispaces.com/ARFF+%28stable+version%29 (accessed 25 March 2013)
This website is a short but sufficient description of the ARFF file format, and the only one I found on the web. It describes the file structure, but not so much how the file ought to be used.
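For illustration, a toy ARFF file read with SciPy's ARFF parser (the relation, attributes and values are invented for this example):

    import io
    from scipy.io import arff

    arff_lines = [
        "@relation genre_toy_example",
        "@attribute spectral_centroid numeric",
        "@attribute zero_crossing_rate numeric",
        "@attribute genre {rock, jazz, classical}",
        "@data",
        "1523.4, 0.121, rock",
        "987.2, 0.054, classical",
    ]

    data, meta = arff.loadarff(io.StringIO("\n".join(arff_lines)))
    print(meta)           # attribute names and types declared in the header
    print(data["genre"])  # nominal values are returned as byte strings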
Rapid-I GmbH. 2013. Rapid-I Operator Übersicht: Ein- und Ausgabe [Rapid-I operator overview: input and output]. http://rapid-i.com/content/view/12/7/lang,de/ (accessed 24 March 2013)
This website describes the features of the RapidMiner software for data analysis and the file formats used.
Ludwig-Maximilians-Universität München. 2012. ELKI: Environment for Developing KDD Applications Supported by Index Structures. http://elki.dbs.ifi.lmu.de/wiki/InputFormat (accessed 24 March 2013)
This website describes the features of the ELKI software for data analysis and the file formats used.
KNIME.com AG. 2013. KNIME Features. http://www.knime.org/introduction/features (accessed 24 March 2013)
This website describes the features of the KNIME software for data analysis and the file formats used.
Chen, Peter P.-S. 1976. The Entity-Relationship Model: Toward a Unified View of Data. ACM Transactions on Database Systems 1 (1): 9-36.
The original paper on the entity-relationship model for describing databases. The model is widely used today and was used by Cory McKay for ACE XML 2.0, and it is included with this presentation.
Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., and Witten I. H. 2009. The WEKA Data Mining Software: An Update. Special Interest Group on Knowledge Discovery and Data Mining Explorations 11 (1).
This is the official paper describing the Weka software for data mining.
  Posted by julian.vogels on March 26, 2013

Automatic Music Transcription: Anssi Klapuri’s fundamental frequency estimation algorithms

For Ichiro Fujinaga’s Music Technology class “MUMT 621 – Music Information Acquisition, Preservation, and Retrieval” I prepared this Keynote presentation about automatic music transcription with emphasis on the work of Anssi Klapuri, especially on his 2006 paper “Multiple Fundamental Frequency Estimation by Summing Harmonic Amplitudes”. You can download it in .key format or as a PDF file (without animations).



Annotated Bibliography

Klapuri, Anssi P. 2006. Multiple Fundamental Frequency Estimation by Summing Harmonic Amplitudes. Paper read at the International Conference on Music Information Retrieval (ISMIR), at Victoria, Canada.
The main paper of the presentation: multiple-F0 estimation by computing a salience function from summed harmonic amplitudes.
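A heavily simplified Python sketch of the paper's core idea, summing magnitude-spectrum samples at the harmonic positions of each F0 candidate; the spectral whitening, the exact harmonic weighting and the iterative estimate-and-cancel stage are omitted, and all parameter values are illustrative:

    import numpy as np

    def salience(mag_spectrum, fs, f0_candidates, n_harmonics=10):
        # Sum of spectrum magnitudes at harmonic bins, with a crude 1/m weighting
        # standing in for Klapuri's weighting function.
        n_fft = (len(mag_spectrum) - 1) * 2
        result = []
        for f0 in f0_candidates:
            s = 0.0
            for m in range(1, n_harmonics + 1):
                k = int(round(m * f0 * n_fft / fs))  # nearest FFT bin of the m-th harmonic
                if k < len(mag_spectrum):
                    s += mag_spectrum[k] / m
            result.append(s)
        return np.array(result)

    # Toy usage: a synthetic tone with three harmonics at 220 Hz
    fs, n = 44100, 4096
    t = np.arange(n) / fs
    x = sum(np.sin(2 * np.pi * 220 * m * t) / m for m in (1, 2, 3))
    mag = np.abs(np.fft.rfft(x * np.hanning(n)))
    candidates = np.arange(80.0, 500.0, 1.0)
    best_f0 = candidates[np.argmax(salience(mag, fs, candidates))]  # should land near 220 Hz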
Klapuri, Anssi P. 2004. Signal Processing Methods for the Automatic Transcription of Music. PhD thesis, Tampere University of Technology, Tampere, Finland.
Anssi Klapuri’s PhD thesis reviewing multiple F0-estimation algorithms and presenting his own algorithms (some already published, some new) along with musical meter estimation.
Klapuri, Anssi P. 2004. Automatic Music Transcription as We Know it Today. Journal of New Music Research 33 (3):269-282.
An overview of the field of automatic music transcription as of 2004.
Raphael, Christopher. 2002. Automatic Transcription of Piano Music. IRCAM – Centre Pompidou, France.
A Hidden Markov Model approach to piano music transcription.
Poliner, Graham E., and Daniel P. W. Ellis. 2007. Improving Generalization For Polyphonic Piano Transcription. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics. New Paltz, NY.
A Support Vector Machines approach to piano music transcription.
Moorer, James A. 1975. On the Segmentation and Analysis of Continuous Musical Sound by Digital Computer. Center for Computer Research in Music and Acoustics, Stanford University.
The original paper of the field, introducing several signal analysis methods such as extracting individual harmonics with bandpass filtering.
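A small SciPy sketch of that basic idea, isolating one partial of a test tone with a band-pass filter (the filter order and cut-off frequencies are arbitrary illustration values, not Moorer's actual design):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 44100
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)  # two partials

    # 4th-order Butterworth band-pass around the second partial (440 Hz)
    sos = butter(4, [400, 480], btype="bandpass", fs=fs, output="sos")
    second_partial = sosfiltfilt(sos, x)  # zero-phase extraction of that harmonic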
Smaragdis, Paris, and Judith C. Brown. 2003. Non-Negative Matrix Factorization for Polyphonic Music Transcription. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics. New Paltz, NY.
A non-negative matrix factorization method to estimate spectral profiles and temporal information (not knowledge-based).
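A minimal sketch of that kind of decomposition with scikit-learn (the librosa front-end, the input file name and the number of components are assumptions for illustration; the paper's own formulation differs in detail):

    import numpy as np
    import librosa
    from sklearn.decomposition import NMF

    y, sr = librosa.load("piano_excerpt.wav", sr=None)        # hypothetical input file
    V = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))   # magnitude spectrogram

    # Approximate V (freq x time) as W (spectral templates) times H (activations)
    model = NMF(n_components=8, init="nndsvd", max_iter=500)
    W = model.fit_transform(V)  # freq x components: one template per note/pitch
    H = model.components_       # components x time: when each template is active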
Marolt, Matija. 2001. A Connectionist Approach to Automatic Transcription of Polyphonic Piano Music. Faculty of Computer and Information Science, University of Ljubljana.
A Neural Networks approach to automatic transcription of piano music.
Ryynänen, Matti P., and Anssi P. Klapuri. 2005. Polyphonic Music Transcription using Note Event Modeling. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics. New Paltz, NY.
A Hidden Markov Model approach to transcription of real-world signals, searching for paths through note event models.
Chang, Wei-Chen, Alvin W. Y. Su, Chunghsin Yeh, Axel Roebel, and Xavier Rodet. 2008. Multiple-F0 Tracking Based on a High-Order HMM Model. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics. New Paltz, NY.
Another Hidden Markov Model approach to F0-tracking involving a rather complex tracking mechanism.
Benetos, Emmanouil, Simon Dixon, Dimitri Giannoulis, Holger Kirchhoff, and Anssi P. Klapuri. 2012. Automatic Music Transcription: Breaking the Glass Ceiling. Paper read at International Society for Music Information Retrieval Conference (ISMIR), at Porto, Portugal.
A recent paper on audio-to-MIDI transcription, reviewing the field and advocating use-case-tailored algorithms.
Paleari, Marco, Benoit Huet, Anthony Schutz, and Dirk Slock. 2008. A Multimodal Approach to Music Transcription. In 15th IEEE International Conference on Image Processing, 2008.
A multimodal approach to transcription (e.g., of guitar) that fuses the audio information with the visual data stream from a video to support traditional transcription techniques.
Martin, Keith D. 1996. A Blackboard System for Automatic Transcription of Simple Polyphonic Music.
This technical report describes the benefits and limitations of a blackboard architecture approach to automatic piano music transcription.
  Posted by julian.vogels on March 12, 2013

Neural Networks Presentation Slides

For Ichiro Fujinaga’s Music Technology class “MUMT 621 – Music Information Acquisition, Preservation, and Retrieval” I prepared this Keynote presentation about (artificial) Neural Networks. You can download it in .key format or as a PDF file (without animations).



Annotated Bibliography

Mariusz Bernacki and Przemysław Włodarczyk. Principles of training multi-layer neural network using backpropagation. Katedra Elektroniki AGH, 6 September 2004. Retrieved on 18 February 2013 from http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html
A good overview of how backpropagation in neural networks works. However, the English is rather poor.
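A compact NumPy sketch of backpropagation for a one-hidden-layer network learning XOR (layer sizes, learning rate and iteration count are arbitrary choices for the example):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden weights
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 0.5

    for _ in range(20000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: propagate the output error back through both layers
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(3))  # approaches [[0], [1], [1], [0]]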
David Kriesel. Ein kleiner Überblick über Neuronale Netze [A Brief Overview of Neural Networks]. Online Book. 2007. Retrieved on 18 February 2013 from http://www.dkriesel.com
A comprehensive introduction to neural networks, written with contributions from many readers as a freely accessible online book (English/German).
Yazhong Feng, Yueting Zhuang and Yunhe Pan. Music Information Retrieval by Detecting Mood via Computational Media Aesthetics. In Proceedings of the IEEE/WIC International Conference on Web Intelligence, 2003.
Research paper on the classification of raw audio into four mood categories. A feature called “relative tempo” is extracted and an “average silence ratio” is calculated. The mood detection is based on a simple backpropagation (BP) neural network classifier.
Laura E. Gomez, Humberto Sossa, Ricardo Barron and Julio F. Jimenez. A new methodology for music retrieval based on dynamic neural networks. International Journal of Hybrid Intelligent Systems, Volume 9, Issue 1. 2012.
Research paper about a new approach to the automatic transcription of raw audio, using a dynamic neural network that is trained with melodies rather than traditional descriptors.
Warren S. McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, Volume 5, 1943.
The first publication about artificial neural networks, claiming that “events and the relations among them can be treated by means of propositional logic”. Applications are discussed.
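For illustration, a McCulloch-Pitts style threshold unit really is just weighted propositional logic; a tiny Python sketch:

    def mp_neuron(inputs, weights, threshold):
        # McCulloch-Pitts unit: fires (1) if the weighted input sum reaches the threshold
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
    OR = lambda a, b: mp_neuron([a, b], [1, 1], 1)
    NOT = lambda a: mp_neuron([a], [-1], 0)
    print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0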
N. Scaringella, G. Zoia and D. Mlynek. Automatic genre classification of music content: a survey. IEEE Manuscript, 1 November 2005.
This manuscript reviews the state-of-the-art techniques for automatic genre classification using machine learning techniques such as neural networks.
Quoc V. Le, Marc’Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen et al. Building High-level Features Using Large Scale Unsupervised Learning. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012.
This research paper written by Google employees describes the 2012 Google experiment in which an unsupervised neural network was trained on ten million YouTube thumbnail images and “invented” the concept of a cat.
  Posted by julian.vogels on February 19, 2013


Julian Vogels
Chief Technology Officer (CTO) at Soundbrenner Limited
Former member of the Input Devices and Music Interaction Lab (IDMIL)
Former CIRMMT Student Member

Berlin | Montréal | Hong Kong | Shenzhen
