Libero Mureddu

Goodbye Intuition LAB #3 in Oslo

Updated: Nov 12, 2018

On Friday, 14 September 2018, I attended the Goodbye Intuition research project's LAB #3, which was open to the public. The presentation was held at the premises of the Norwegian Academy of Music (NMH).


The project's aim is stated at the beginning of its homepage:

With Goodbye Intuition we seek to challenge our roles and artistic preferences as improvising musicians by improvising with "creative" machines.

The 'creative' machine is named Kim-Auto. 'Roughly, we can sum up Kim-Auto’s planned architecture to consist of an archiving module, a listening/learning module and a generative module.' (Goodbye Intuition by Ivar Grydeland et al.).


In the LAB I attended, the live performers were Morten Qvenild and Andrea Neumann.


Observations

I found the presentation extremely interesting, as it is partly connected with my artistic doctorate. During the panel discussion that followed the performance, many interesting questions and observations arose. Here are some thoughts, both my own and drawn from the participants' observations.

  • How important is it to clarify to the audience what is played by the machine and what is played by humans?

This separation between the two roles can be made either visually or sonically, for example if the machine plays back sounds that clearly don't belong to the human performer. Some of the machine-produced sounds were lo-fi versions of the incoming acoustic material, an interesting solution as it gave the impression of material extracted from memory, where thoughts and actions can be blurred. Another interesting decision concerns the dynamic range of the machine's material, potentially wider than that of the acoustic instrument, and more generally the reproduction balance between acoustic and electronic instruments. Still, the question remains: is it actually important to separate the roles and make sure the audience immediately understands the rules of the game?

  • How much and what kind of intelligence should be given to the performing machine?

For me this is a crucial question. The intelligence of the machine will inevitably define the global aesthetics of the result. In order to add more diversity to the machine's output, should one add more randomness or more control?

  • Parent / sibling relationship between human performer and machine

This point is particularly interesting as it relates, to a certain extent, to the topic of research ethics when working with a machine. Is the machine a subordinate, an equal, or something else? How much of the machine's behaviour should we as humans control? How much should we adapt our playing to the machine?

  • Reaction of the performers when playing with the machine

The reactions seem to me to sit between two opposites: on one hand, the performer can try to fully engage with the machine, to the point of trying to 'save' a musical situation when facing a non-cooperative partner; on the other, the performer can abandon the idea of interaction altogether, without trying to influence the musical situation.

  • Use of machine-learning techniques to classify in real-time the human-generated material

Someone in the audience suggested this strategy, as it would free the performer from making detailed choices about the classification of the incoming material.
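To make the suggestion concrete, here is a minimal sketch of what such real-time classification of incoming material might look like. Everything in it — the feature set (RMS energy, zero-crossing rate, spectral centroid), the nearest-centroid classifier, and the 'tonal'/'noisy' labels — is my own hypothetical illustration, not part of Kim-Auto's actual architecture:

```python
import numpy as np

def frame_features(frame, sr=44100):
    """Reduce one audio frame to a tiny feature vector:
    RMS energy, zero-crossing rate, and spectral centroid."""
    rms = np.sqrt(np.mean(frame ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, zcr, centroid])

class NearestCentroid:
    """Minimal classifier: each material class is represented by
    the mean of its training feature vectors; an incoming frame is
    assigned to the nearest mean."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, lab in zip(X, y) if lab == c], axis=0)
             for c in self.labels_])
        return self

    def predict(self, x):
        distances = np.linalg.norm(self.centroids_ - x, axis=1)
        return self.labels_[int(np.argmin(distances))]

# Toy training data: sine-wave frames as 'tonal', white noise as 'noisy'.
sr, n = 44100, 2048
t = np.arange(n) / sr
rng = np.random.default_rng(0)
tonal = [np.sin(2 * np.pi * f * t) for f in (220, 330, 440)]
noisy = [rng.standard_normal(n) * 0.3 for _ in range(3)]
X = [frame_features(f) for f in tonal + noisy]
y = ['tonal'] * 3 + ['noisy'] * 3

clf = NearestCentroid().fit(X, y)
incoming = np.sin(2 * np.pi * 550 * t)        # a new, unseen tonal frame
print(clf.predict(frame_features(incoming)))  # → tonal
```

In a performance setting, `frame_features` would run on short windows of the live input, and the predicted label would steer the generative module instead of the performer having to specify the material type by hand.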

  • Relationship with the audience

The machine doesn't care whether there is an audience or not. However, the performers inevitably do.

  • A paradox

An audience member suggested muting the electronic part for the audience, who would then hear only the live player's part.

  • Improvisation as a method

I liked the concept of using improvisation as a method to investigate computer-generated material.


References

Grydeland, Ivar, et al. "Goodbye Intuition." Accessed 16 September 2018. https://www.researchcatalogue.net/view/411228/424771

