Sound generation using artificial intelligence came into the limelight when Google announced the Google Duplex project this year. Moving in the same direction, Facebook AI Research (FAIR) has used artificial intelligence to translate music from one instrument to another. The inspiration comes from humans' ability to listen to a piece of music and then hum or whistle it.
To implement this, the FAIR researchers built on WaveNet, a neural network developed by DeepMind to generate raw audio, which was also part of the Google Duplex project. They have managed to take famous pieces from composers such as Mozart and translate them to be played as whistling or by another instrument altogether. This means a piece that used a guitar can be translated to use a piano.
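The core building block of WaveNet is the dilated causal convolution: each output sample depends only on the current and past input samples, and stacking layers with growing dilation rates lets the network see far back in the waveform. Below is a minimal NumPy sketch of that one building block, purely for illustration; it is not FAIR's actual model, and the function name and kernel values are invented for this example.

```python
import numpy as np

def causal_dilated_conv1d(x, kernel, dilation):
    """Causal dilated 1-D convolution (illustrative sketch).

    The output at time t depends only on inputs at
    t, t - dilation, t - 2*dilation, ... -- never on future samples.
    """
    k = len(kernel)
    # Left-pad so the output keeps the input length and stays causal.
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), x])
    out = np.zeros(len(x))
    for t in range(len(x)):
        # Taps spaced `dilation` apart, ending at the current sample.
        taps = xp[t : t + pad + 1 : dilation]
        out[t] = np.dot(taps, kernel)
    return out

# Feeding an impulse through the layer shows the two taps at t and
# t - dilation, which is how stacked layers (dilations 1, 2, 4, 8, ...)
# grow an exponentially large receptive field over the raw waveform.
x = np.zeros(8)
x[0] = 1.0
print(causal_dilated_conv1d(x, np.array([1.0, 1.0]), 2))
# -> [1. 0. 1. 0. 0. 0. 0. 0.]
```

A real WaveNet adds gated activations, residual connections, and learned kernels on top of this primitive; the sketch only shows why the convolution is "causal" and "dilated".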
Some examples of music translated from one instrument or form to another can be heard in the video below.
If you are interested in the details of how this works, you can read the research paper written by the FAIR researchers.