Mike Linksvayer of Creative Commons yesterday steered my attention toward BMAT, a newly launched music recommendation engine which uses the Magnatune catalog as its demo.
What's particularly interesting about it is how the recommendations cross genres but remain more or less consistent in mood.
For example, here's their recommendation page for Utopia Banished, a metal band at Magnatune:
Lots of nice variety in there, and interesting choices.
This is an outgrowth of SIMAC, a project at the Universitat Pompeu Fabra in Barcelona. I first heard from them, I think about two years ago, when they asked to use our entire catalog to seed the engine and for demonstration purposes. I then saw a demo back in April that looked promising, and now they've launched it as a company.
Their current demo points to 30-second versions of our songs as downloads, and quite nicely has a Magnatune logo next to each song that links to the artist page for that song, providing good, strong attribution.
The 2-D map feature is interesting: it shows where a particular set of recommendations lies on a similarity map. A punk similarity search, for example, shows how close the genres involved are to one another.
When some new age and world music is recommended as being similar to Duo Chambure, the engine sees that those tracks lie at the outskirts of musical similarity.
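For anyone wondering how a map like that might be built, here's a rough sketch in Python. It's purely illustrative (not necessarily what BMAT actually does, and the feature values below are random stand-ins): project each track's high-dimensional audio feature vector down to two dimensions, so that nearby points end up corresponding to similar-sounding tracks.

    # Hypothetical sketch: project per-track feature vectors onto a 2-D "similarity map".
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    features = rng.normal(size=(50, 13))                  # stand-ins for real audio features
    coords = PCA(n_components=2).fit_transform(features)  # one (x, y) point per track
    print(coords[:3])                                     # positions to plot on the map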
I was thinking about using this technology to generate "similar mood" playlists, and may do that in the future.
-john
Is all the data static? How was it generated? How many opinions was it based upon? How "accurate" is this information?
Posted by: Charlie | March 18, 2006 at 03:26 PM
Charlie wrote:
> Is all the data static? How was it generated? How many opinions was it based upon? How "accurate" is this information?
Yes, the demo at http://demos.bmat.com/songsphere is actually a static, precalculated version, but the recommendation and similarity engine can calculate similarities in a fraction of real time, which also allows "query by example".
The recommendations are calculated directly from the audio: features and descriptions are automatically extracted and then compared. There is no need for human interaction or opinions at all; it doesn't even need metadata such as genre or artist names.
Therefore SongSphere gives recommendations that are accurate in the sense of being objective.
The demo version is based on a fairly small collection of about 2,300 songs, but the engine can index music collections of a million songs and search databases of that size in a fraction of a second.
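To make that concrete, here is a rough Python sketch of content-based similarity. It is not BMAT's actual pipeline; the file names are hypothetical, the "feature" is just a mean-MFCC timbre summary, and a precomputed nearest-neighbour index stands in for the static demo.

    # Hypothetical sketch of content-based "query by example", not BMAT's code.
    import numpy as np
    import librosa
    from sklearn.neighbors import NearestNeighbors

    def timbre_vector(path):
        """Summarize a track by the mean of its MFCCs (a crude timbre descriptor)."""
        y, sr = librosa.load(path, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, n_frames)
        return mfcc.mean(axis=1)                             # shape (13,)

    tracks = ["song_a.mp3", "song_b.mp3", "song_c.mp3"]      # hypothetical catalog
    features = np.vstack([timbre_vector(p) for p in tracks])

    # Build the index once (the precalculated part), then query it in milliseconds.
    index = NearestNeighbors(n_neighbors=3).fit(features)
    distances, neighbours = index.kneighbors(features[:1])   # query by example: song_a
    print([tracks[i] for i in neighbours[0]])                # song_a comes back first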
Posted by: Koppi | March 19, 2006 at 03:34 AM
That makes sense then. The "similarities" didn't seem to be subjective. Tracks that would not normally be thought of as similar were popping up ("This song sounds nothing like this other song!"), and that's due to a mathematical sonic analysis rather than human opinion.
So if the analysis is not subjective, is there any real purpose to this exercise, then? I couldn't really agree with the comparison model. Do we have to let computers do everything?
Posted by: Charlie | March 19, 2006 at 10:11 AM
"I was thinking about using this technology to generate "similar mood" playlists, and may do that in the future."
Sounds like a good idea. It might help introduce people to music they might not have heard otherwise, and help avoid playlists that vary greatly in mood despite all the songs being from a single genre.
Still, all sonic analysis and no human opinion could lead to playlists of songs that don't fit together. I'd love to see a recommendation/categorisation/analysis system that combines both human and algorithmic input.
Perhaps you could use the BMAT system to generate some playlists, then tweak them by removing songs that don't fit. That's assuming you're thinking of playlists akin to what you already have for genres. Another option is to add a link to each artist/album page like "listen to music with a similar feel/sound", in which case, you'd be relying only on the BMAT recommendations.
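A hypothetical sketch of that workflow in Python: take the engine's ranked neighbours for a seed track, then drop anything a human editor flags as not fitting (the track names and scores below are made up).

    # Hypothetical: draft a playlist from engine output, then prune by hand.
    def draft_playlist(seed, neighbours, rejected):
        """neighbours: list of (track, similarity); rejected: tracks an editor removed."""
        ranked = sorted(neighbours, key=lambda pair: pair[1], reverse=True)
        return [seed] + [track for track, _ in ranked if track not in rejected]

    neighbours = [("track_b", 0.91), ("track_c", 0.84), ("track_d", 0.80)]  # made-up scores
    print(draft_playlist("track_a", neighbours, rejected={"track_d"}))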
Posted by: Nathan Jones | March 19, 2006 at 04:09 PM
>>Tracks that would not normally be thought of as similar were popping up ("This song sounds nothing like this other song!")
Some percentage of songs do give this result, but I must disagree with you in general: most of the results are consistent (note that the similarity criteria used in the demo are very simple; see below). You also have to look at the similarity rating, which is relative. With a small collection of songs, some of the first 10 results may have a pretty low similarity rating.
One other thing worth mentioning: the similarity criteria used in the demo are pretty simple, for example timbre. Songs that are similar only in timbre don't necessarily sound much alike in an overall comparison. The technology does, however, allow the criteria to be combined and adjusted to personal tastes; in other words, how subjective it becomes depends on the "mix" of similarity criteria.
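A tiny hypothetical example of what such a "mix" could look like in Python: each criterion contributes its own similarity score, and the weights decide how much each one counts (all the numbers below are invented).

    # Hypothetical: combine per-criterion similarities with adjustable weights.
    def combined_similarity(scores, weights):
        """scores/weights are dicts keyed by criterion name; scores are in [0, 1]."""
        total = sum(weights.values())
        return sum(weights[c] * scores[c] for c in scores) / total

    scores = {"timbre": 0.9, "rhythm": 0.4, "tonality": 0.6}                         # invented values
    print(combined_similarity(scores, {"timbre": 1, "rhythm": 1, "tonality": 1}))      # ~0.63
    print(combined_similarity(scores, {"timbre": 3, "rhythm": 0.5, "tonality": 0.5}))  # 0.80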
Regarding the usefulness:
If you had it running on a big database, it would actually help you discover music.
The way you normally browse through a collection is, "OK, I know this artist, and I've heard something about that one." That's more or less it: you're limited to the "I already know this" circle, and new music stays out of reach. Let's admit it, very few people will take the time to look through several thousand songs of the same style to find something they like. A technology based on content analysis doesn't care how well known an artist is, so it helps you find similar music, including unknown music and music you'd never discover otherwise.
>>Do we have to let computers do everything?
As long as we have enough time or patience to spend hours or days doing what they do in seconds, we don't.
Posted by: Vadim | March 19, 2006 at 04:18 PM