mSynth: The Realtime Collaborative Music Experience

Hey everyone, we are Team mSynth, the winning team of the Outside Lands 24-hour music hackathon. I'm Hanoi, I'm Robert, I'm Rodan, I'm Eric, Sam. Our hack combines three core technologies: we've invented a new way for musicians and audience members to make music together, using state-of-the-art tools. We combine artificial intelligence, a React Native app, and a mobile music-making platform. Our app is built on open source deep learning models from Google Magenta, a React Native app, and PubNub to connect all the different components together.

Let's introduce the first part, which is the neural synthesizer. Recently, Google Magenta open sourced some of their deep learning models, and the particular neural network we're using has been trained on a multitude of different instrument sounds, from pianos to violins to guitars. What it's trained to do is produce completely new, unheard-of instrument sounds based on what it has heard before. Let's take it for a little spin. You see here, if we move closer to the violin sound and I play something on the keyboard, we get something that sounds like a violin. If we move a little closer to the organ sound, we get something that sounds like an organ. And now, if we move somewhere in between the violin and the organ, the neural network takes characteristics of both and creates an entirely new sound by blending them together. Let's see what it sounds like… All right!
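
Conceptually, you can think of the synthesizer as blending the learned embeddings of two instruments with a weight that slides between them. The sketch below only illustrates that idea; it is not Google Magenta's actual NSynth API, and every name in it is made up.

```typescript
// Conceptual sketch only: blending two instrument embeddings with a weight t.
// In the real system this interpolation happens inside Google Magenta's
// NSynth model; the names here are illustrative, not Magenta's API.
type Embedding = number[];

function blend(violin: Embedding, organ: Embedding, t: number): Embedding {
  // t = 0 sounds like the violin, t = 1 like the organ,
  // anything in between is a brand new hybrid timbre.
  return violin.map((v, i) => (1 - t) * v + t * organ[i]);
}

// e.g. halfway between the two instruments:
// const hybrid = blend(violinEmbedding, organEmbedding, 0.5);
```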

Next up, we're going to show you how we connected this neural network to our mobile app. We built the mobile app using React Native, and this is what it looks like. It takes your accelerometer readings and sends them up to PubNub; PubNub averages them and signals the result down to the neural network to generate the new sound. As you can see here, I'm moving it on the fly: moving more towards the violin… and now more towards the organ… The beauty of this is that it creates a brand new experience for the artist, who can generate new sounds on the fly.
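
A minimal sketch of that app-side flow, assuming an Expo-style accelerometer API, demo PubNub keys, and a hypothetical "msynth-tilt" channel (an illustration, not the actual mSynth source):

```typescript
// Read the phone's accelerometer and publish each reading to a PubNub channel
// for the synth side to consume. Channel name and keys are placeholders.
import PubNub from 'pubnub';
import { Accelerometer } from 'expo-sensors';

const pubnub = new PubNub({
  publishKey: 'demo',        // replace with real keys
  subscribeKey: 'demo',
  userId: 'audience-phone-1',
});

Accelerometer.setUpdateInterval(200); // throttle to ~5 readings per second

Accelerometer.addListener(({ x, y, z }) => {
  // Each raw reading goes up as-is; averaging happens on the receiving side.
  pubnub.publish({
    channel: 'msynth-tilt',
    message: { x, y, z, sentAt: Date.now() },
  });
});
```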

Say, for example, Hanoi and I are a hip-hop DJ duo and we want to make new sounds on the fly. We're here in our studio, just making new sounds, sounds that people have never heard before. This also translates very well to live performance: if I'm an audience member who's very devoted to our DJ group, I'd be getting a new experience every single time, because the artist can just generate new sounds on the fly.

In addition to letting a band member collaborate in the music-making process, we also built a completely new fan-interactive experience. So far we've shown one person controlling where the neural network sits in the sound space, but now imagine 10,000 people at Outside Lands all with the app open. We send those multiple streams of data up, average out where the crowd's accelerometer data is pushing us on the neural network, and you get a completely different fan-interactive experience where all the fans together are shaping the new sounds the artist is playing.
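
A minimal sketch of that aggregation step, assuming a small Node process sits between the audience phones and the synthesizer; the channel names, window size, and update interval are stand-ins, not the actual mSynth implementation:

```typescript
// Subscribe to the crowd's tilt readings, keep a rolling window, and publish
// the averaged position for the synth to map onto the instrument space.
import PubNub from 'pubnub';

const pubnub = new PubNub({
  publishKey: 'demo',
  subscribeKey: 'demo',
  userId: 'aggregator',
});

const recent: { x: number; y: number; z: number }[] = [];
const WINDOW = 500; // keep the last 500 readings from the crowd

pubnub.addListener({
  message: (event: any) => {
    recent.push(event.message);
    if (recent.length > WINDOW) recent.shift();
  },
});
pubnub.subscribe({ channels: ['msynth-tilt'] });

// Every 250 ms, publish the crowd's average tilt so the synth can move
// between instruments (e.g. violin <-> organ) based on everyone at once.
setInterval(() => {
  if (recent.length === 0) return;
  const sum = recent.reduce(
    (acc, r) => ({ x: acc.x + r.x, y: acc.y + r.y, z: acc.z + r.z }),
    { x: 0, y: 0, z: 0 },
  );
  pubnub.publish({
    channel: 'msynth-position',
    message: {
      x: sum.x / recent.length,
      y: sum.y / recent.length,
      z: sum.z / recent.length,
    },
  });
}, 250);
```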

In a live performance, DJ Hanoi could be telling everybody: "All right, I need everyone to tilt their phones to the left… and wait… and tilt to the right! And tilt to the left… And tilt way up!"

We also added another mode. For example, when the artist is performing, he usually has some cool samples, like… And while the artist is performing… the audience is able to trigger these sounds too. We also created a kind of "God mode" over here… You can imagine this being used by, say, a hype man working in conjunction with the artist. And if an artist doesn't want to give this functionality to the audience, we can turn it off for them and give it only to the band members.
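
A minimal sketch of how a sample-trigger message with that kind of role check might look; the channel name, roles, and message shape are hypothetical:

```typescript
// Publish a "trigger this sample" message, gated by a simple role check so
// the artist can restrict triggering to band members only.
import PubNub from 'pubnub';

type Role = 'artist' | 'band' | 'audience';

const pubnub = new PubNub({
  publishKey: 'demo',
  subscribeKey: 'demo',
  userId: 'hype-man-1',
});

function triggerSample(sampleId: string, role: Role, allowAudience: boolean) {
  // If audience triggering is turned off, only the artist and band get through.
  if (role === 'audience' && !allowAudience) return;

  pubnub.publish({
    channel: 'msynth-samples',
    message: { type: 'trigger', sampleId, from: role },
  });
}

// e.g. a band member firing an air-horn sample during the set:
// triggerSample('air-horn', 'band', false);
```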

And lastly, because the interaction between the live artist performing on stage and the audience is important, we also implemented a feature where, if the artist changes the sounds they're using, the user interface automatically updates to reflect those changes. For example, here… or… You'll see the buttons update without me even having to close and reopen the app. If the artist updates the samples, the text on the buttons updates to match the new samples. So here we now press "magenta"… Cool!
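
A minimal sketch of how the auto-updating buttons could work on the app side, assuming a hypothetical "msynth-sample-config" channel that carries the new sample labels (an illustration, not the actual mSynth source):

```typescript
// When the artist publishes a new sample set, every open app re-renders its
// buttons from that message, with no need to close and reopen the app.
import React, { useEffect, useState } from 'react';
import { View, Button } from 'react-native';
import PubNub from 'pubnub';

const pubnub = new PubNub({
  publishKey: 'demo',
  subscribeKey: 'demo',
  userId: 'audience-phone-1',
});

export function SamplePad() {
  const [labels, setLabels] = useState<string[]>(['kick', 'snare', 'magenta']);

  useEffect(() => {
    const listener = {
      // Artist pushed a new sample set; refresh the button labels in place.
      message: (event: any) => setLabels(event.message.labels),
    };
    pubnub.addListener(listener);
    pubnub.subscribe({ channels: ['msynth-sample-config'] });
    return () => pubnub.removeListener(listener);
  }, []);

  return (
    <View>
      {labels.map((label) => (
        <Button
          key={label}
          title={label}
          onPress={() =>
            pubnub.publish({
              channel: 'msynth-samples',
              message: { type: 'trigger', sampleId: label },
            })
          }
        />
      ))}
    </View>
  );
}
```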

So that's a little demo, a little tour of our mSynth app. We'll be at Outside Lands for all three days, so if you're watching this video, definitely come check out our booth. We are Team mSynth!
