Week 9 – Case Study: EarSketch

May 17, 2021

What’s good everybody! This is Peter Li, back at it again – welcome to my penultimate blog post!

My main activity this week was to finish the final piece of research for this project: a case study of a website called EarSketch (earsketch.gatech.edu). I was interested in testing it because it is one of the first sites/apps to introduce the idea of arranging electronic music simply by coding in Python (or JavaScript, but we all know which one is better). This matters because it adds another piece of evidence to the point I established in last week’s blog post: that AI music software is bringing more and more non-musicians into making music. Also, EarSketch lets me produce some audible music, which adds a new layer to my project 🙂

EarSketch consists primarily of two parts: the digital audio workstation (DAW) and the code editor. Traditional DAWs (such as Pro Tools and Ableton Live) are the central applications used to produce music electronically. The DAW in EarSketch, on the other hand, is like a read-only PDF version of those: it displays and can play back the music arrangement, but the arrangement cannot be edited there. That’s because the arranging is actually done in the code editor, which looks just like your average text editor. Below is a screenshot of the layout of EarSketch (the code represents a demo song I made)!

Here’s a quick breakdown of parts of the code: each line (except comments) starts with a command or defines a variable, such as

  • setTempo (sets the overall speed of the song, in beats per minute)
  • fitMedia (assigns a sound clip to an audio track over a range of bars)
  • fillA (a variable holding a beat pattern that can be assigned to drum tracks)

For the fitMedia() lines, the first number represents which audio track the assigned sound goes on, the second is the start bar, and the third is the end bar. In the fillA pattern, each “0” represents a drum hit and each “-” represents a rest.
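To make that argument order concrete, here’s a minimal sketch in Python. The functions below are local stand-ins that just record their arguments (the real ones only exist inside EarSketch), and the sound names are made-up placeholders, not actual EarSketch library sounds:

```python
# Minimal sketch of an EarSketch-style script. setTempo, fitMedia, and
# makeBeat here are local stand-ins that record calls into a list; the
# real versions exist only in EarSketch, and the sound names are
# hypothetical placeholders.

arrangement = []

def setTempo(bpm):
    arrangement.append(("tempo", bpm))

def fitMedia(sound, track, start_bar, end_bar):
    # sound goes on `track`, filling bars start_bar through end_bar
    arrangement.append(("media", sound, track, start_bar, end_bar))

def makeBeat(sound, track, start_bar, beat_string):
    # "0" plays the drum sound for one step, "-" is a rest
    hits = [i for i, step in enumerate(beat_string) if step == "0"]
    arrangement.append(("beat", sound, track, start_bar, hits))

setTempo(120)
fitMedia("SYNTH_LEAD", 1, 1, 9)     # track 1, bars 1 through 9
fitMedia("BASS_LOOP", 2, 1, 9)      # track 2, bars 1 through 9

fillA = "0---0---0-0-0---"          # one bar of sixteenth-note steps
makeBeat("KICK_DRUM", 3, 1, fillA)  # track 3, starting at bar 1

print(arrangement[-1])
# hits land on steps 0, 4, 8, 10, and 12 of the bar
```

Reading the beat string left to right gives you the rhythm one step at a time, which is why the “0”/“-” notation is so quick to tweak by hand.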

I think EarSketch contributes to my point because it provides a new way of making music that is relatively easy to use and thus has the potential to attract more people to learning music production. As of right now, EarSketch still depends largely on the user to compose the music, which suggests that in the near future it will serve to assist music makers rather than render them obsolete.

That’s it for this one – much love to you if you’ve read this far! In my next blog I will be summing up my findings from all these weeks and providing a comprehensive answer to my research question. Until then, peace out!

Here’s the demo song if you wanna check it out – keep in mind this is the first time I’ve done something like this, so it sounds pretty basic 😬
