AURALITY

AURALITY is an augmented reality audio project exploring Queensland’s iconic rainforests, rivers and reefs through music, sound and acoustic ecology. The project connects multiple communities along the Queensland coastline for QMF 2017, with active points in Brisbane, Noosa, Maryborough, Tin Can Bay, Townsville and Cairns. I am launching a new website on May 20 that will include further details for each location, and the QMF site and program will link back to QCRC and the AURALITY site. The installation can be experienced by downloading a free mobile app for iOS and Android that uses GPS points along the coast of Queensland to trigger audio based on location and movement. The free AURALITY app will launch on July 18 with 100 soundscapes activated across Queensland.
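
At its core the app is a geofencing system: the listener’s GPS position is compared against a set of trigger zones, and a soundscape starts when the listener enters one. Below is a minimal Python sketch of that idea; the coordinates, radii, file names and the play callback are illustrative assumptions, not details of the actual AURALITY app.

```python
import math

# Hypothetical trigger zones: (latitude, longitude, radius in metres, soundscape).
TRIGGER_POINTS = [
    (-27.4698, 153.0251, 150, "brisbane_river.ogg"),  # illustrative only
    (-16.9186, 145.7781, 150, "cairns_reef.ogg"),     # illustrative only
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_triggers(lat, lon, play):
    """Start every soundscape whose trigger zone contains the listener."""
    for t_lat, t_lon, radius, soundscape in TRIGGER_POINTS:
        if haversine_m(lat, lon, t_lat, t_lon) <= radius:
            play(soundscape)

# Example: a device reports its position; matching soundscapes are started.
check_triggers(-27.4698, 153.0251, play=lambda s: print("playing", s))
```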

Performative Affordances of Microelectronics

Performative affordances of microelectronics": This project seeks to establish a research environment to sxplore 'smart' electronic instruments for music perfomance. This environment includes the establishment of new collaborations within and beyond the institution, a novel and smart instrument building infrastructre based on microelectronics, and artistic research activities exploring current issues in the performative affordances of microelectronics.

Team members: John Ferguson, Andrew Brown.

Doppelgänger Sweet

This performance presents a series of improvisations by Stephen Emmerson, played with an interactive computer system via two Yamaha Disklaviers. It builds on the research of Professor Andrew Brown, who, supported by an Australian Research Council Discovery grant, has been exploring how creativity can be stimulated by such interactive tools.

Each of the ten pieces used a different Pure Data patch developed by Stephen Emmerson from templates designed by Lloyd Barrett. This film version of the performance, recorded in 2015, has been edited creatively by Paul Carasco (from Classical Film and Sound) and Stephen Emmerson to visually underline the fascinating relationship between the human and computer-generated sounds. As the titles suggest, the pieces are intended to be playful explorations of the possibilities across various musical styles.
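
The Pure Data patches themselves are not reproduced here, but the following Python sketch (using the mido library rather than Pd) illustrates the general interaction pattern: the computer listens to the performer's notes and answers with a transformed echo. The port choices and the transformation are illustrative assumptions, not the behaviour of the actual patches.

```python
import mido

inport = mido.open_input()    # e.g. the Disklavier's MIDI out (assumed default port)
outport = mido.open_output()  # e.g. the Disklavier's MIDI in (assumed default port)

for msg in inport:
    if msg.type in ("note_on", "note_off"):
        # Respond a fifth above at reduced velocity, so the instrument
        # 'shadows' the performer with a quieter doppelgänger voice.
        echo = msg.copy(note=min(msg.note + 7, 127),
                        velocity=max(msg.velocity // 2, 1))
        outport.send(echo)
```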

Team members: Stephen Emmerson, Andrew Brown

Smart Music Research

This project developed psychology-inspired techniques for the algorithmic generation of music. The project was funded by an Australian Research Council (ARC) Discovery grant. It included statistical analysis of musical score data to ascertain the validity of perceptual theories of implication, prediction, and expectation in musical structures. This led to sets of ‘rules’ for algorithmic composition, and the analysis also suggested appropriate values for the parameters of those rules. Based on these rules we built algorithmic music systems that generated music from composer-controlled parameters affecting note-level decisions. These generative models were tested with audiences through online surveys and critical review. The findings of the project appeared in academic publications in the fields of computer music and music cognition, and creative works that utilised these techniques were performed and exhibited around the world.
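
As a rough illustration of such rule-based, parameter-driven generation, the sketch below prefers small melodic intervals and favours a change of direction after a large leap, loosely in the spirit of implication-realization theory. The specific rules, weights and parameter names are assumptions for illustration, not the project's published models.

```python
import random

def generate_melody(length=16, start=60, leap_threshold=5, smoothness=2.0):
    """Generate MIDI note numbers; higher smoothness favours smaller steps."""
    notes = [start]
    prev_interval = 0
    for _ in range(length - 1):
        candidates = list(range(-7, 8))  # intervals within a fifth either way
        weights = []
        for iv in candidates:
            w = 1.0 / (1.0 + abs(iv)) ** smoothness  # prefer small steps
            # After a leap, favour motion back in the opposite direction ('gap fill').
            if abs(prev_interval) >= leap_threshold and iv * prev_interval < 0:
                w *= 3.0
            weights.append(w)
        interval = random.choices(candidates, weights=weights)[0]
        notes.append(max(36, min(96, notes[-1] + interval)))  # clamp to keyboard range
        prev_interval = interval
    return notes

print(generate_melody())
```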

Team members: Andrew Brown (Griffith University), Robert Davidson (University of Queensland), Toby Gifford (Griffith University), Eugene Narmour (University of Pennsylvania), Geraint Wiggins (Goldsmiths, University of London), David Temperley (Eastman School of Music, University of Rochester)

Controlling Interactive Music Systems

A three-year Australian Research Council (ARC) Discovery Project, this project investigates techniques for controlling computer music systems during performances with a musician. It will identify and evaluate parameters and patterns for expressive variation of music algorithms, contribute new theories of music representation, and provide insights into human interactions with semi-autonomous technologies.
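
One simple way to picture this kind of control is a single expressive parameter fanned out across several algorithm parameters at once, as in the hypothetical mapping below. The parameter names and ranges are illustrative assumptions, not findings of the project.

```python
def map_intensity(intensity):
    """Map a single 0.0-1.0 expressive control onto generator parameters."""
    assert 0.0 <= intensity <= 1.0
    return {
        "note_density": 1 + intensity * 7,          # notes per second
        "velocity_mean": int(40 + intensity * 70),  # louder when more intense
        "pitch_range": int(12 + intensity * 24),    # wider compass in semitones
    }

# A fader or sensor value from the performer updates the algorithm live.
print(map_intensity(0.25))
print(map_intensity(0.9))
```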

Team members: Andrew Brown (Griffith University), Toby Gifford (Griffith University), Ian Whalley (University of Waikato, NZ), Michael Young (Goldsmiths, University of London), François Pachet (Sony Computer Science Laboratory, France)

The Bellmann Corpus

The Bellmann Corpus, released in 2013, was designed to provide an accurate and extensive collection of piano repertoire for the purposes of music research. It consists of musical scores for over 650 pieces (or complete sections of multi-movement works) for piano or harpsichord: 11 or 12 scores from each of 60 composers, ranging chronologically from Chambonnières (1602–1672) to Barber (1910–1981), so the corpus samples a wide range of musical styles. The collection resulted from the conversion of 2,220 pages of printed music, averaging 37 pages per composer, to a digital format. For comparison purposes, it also includes 12 dodecaphonic pieces by Ernst Krenek.

Composers

View the complete list of composers and pieces used for the Bellmann Corpus.

About the creator

Héctor Bellmann has a background in physics, electronics and computer programming. He graduated with a Bachelor of Arts from the University of Queensland in 1996, completed a Master of Philosophy in Information Technology at the Queensland University of Technology in 2006, and received his PhD in Music from the Queensland Conservatorium, Griffith University in 2011. The Bellmann Corpus was developed during his research studies at Griffith, which examined the computational assessment of the features of musical style.

Corpus files

The Corpus has been extended in the years since this study was completed and is provided here as a resource to assist future researchers. The scores are available in two machine-readable file formats (a brief loading example follows the list):

  1. MusicXML, which can be opened by MuseScore or imported into Sibelius or Finale
  2. Coda notation files generated by Finale 2004, which can be opened by that or later versions of Finale.
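
As a brief illustration, a MusicXML score from the corpus could be loaded for analysis with the freely available music21 toolkit; the file path below is hypothetical.

```python
from music21 import converter

# Parse one corpus score (hypothetical path) into a music21 Stream.
score = converter.parse("bellmann/chambonnieres_courante.xml")

# Print the first few pitched events and their durations in quarter lengths.
for n in list(score.recurse().notes)[:10]:
    print(n.pitches, n.quarterLength)
```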
