An integrated solution combining several technology layers to expertize every song.
We analyze core audio parameters such as beats per minute (BPM), key and tonality. We also provide clients with useful additional audio properties, such as danceability and the overall energy of the song, which together power a targeted search and playlisting system.
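As a minimal sketch of how one of these parameters can be derived, the snippet below estimates a track's key and tonality by correlating a 12-bin chroma vector against the well-known Krumhansl-Schmuckler key profiles. The chroma vector here is a hypothetical stand-in for features extracted from the audio signal; this is an illustration of the general technique, not our production pipeline.

```python
# Krumhansl-Schmuckler key profiles: expected relative pitch-class
# salience for a major and a minor key with tonic at index 0.
MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR_PROFILE = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
PITCHES = ["C", "C#", "D", "D#", "E", "F",
           "F#", "G", "G#", "A", "A#", "B"]

def correlate(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def estimate_key(chroma):
    """Return the (tonic, mode) whose rotated profile best matches the chroma."""
    best = None
    for tonic in range(12):
        rotated = chroma[tonic:] + chroma[:tonic]  # put candidate tonic at index 0
        for mode, profile in (("major", MAJOR_PROFILE), ("minor", MINOR_PROFILE)):
            score = correlate(rotated, profile)
            if best is None or score > best[0]:
                best = (score, PITCHES[tonic], mode)
    return best[1], best[2]

# A chroma vector with strong C, E and G energy reads as C major.
chroma = [1.0, 0.1, 0.2, 0.1, 0.8, 0.3, 0.1, 0.9, 0.1, 0.2, 0.1, 0.2]
print(estimate_key(chroma))  # → ('C', 'major')
```

In practice the chroma vector would be computed from the audio signal (e.g., with a constant-Q transform) and averaged over the whole track before this comparison.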
We combine audio signal analysis and machine learning to analyze audio content not only for key audio parameters but also for genres and moods. Thanks to a proprietary, hand-curated knowledge base of 1.6 million tracks on which our system was trained, we can extract 30 mood descriptors as well as hundreds of emotion-related parameters.
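The idea of leaning on a labelled reference set can be sketched with a simple nearest-neighbour lookup: a new track's feature vector is compared against hand-labelled examples and inherits the closest moods. The feature layout (e.g., energy, valence, normalized tempo) and the mood labels below are illustrative assumptions, not our actual taxonomy or model.

```python
# Hypothetical reference set: (feature vector, mood descriptor) pairs,
# standing in for a model trained on a curated knowledge base.
REFERENCE = [
    ([0.9, 0.8, 0.9], "euphoric"),
    ([0.8, 0.7, 0.8], "energetic"),
    ([0.2, 0.2, 0.2], "melancholic"),
    ([0.3, 0.1, 0.2], "somber"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def top_moods(features, k=2):
    """Return the k mood descriptors of the nearest labelled examples."""
    ranked = sorted(REFERENCE, key=lambda r: distance(features, r[0]))
    return [label for _, label in ranked[:k]]

print(top_moods([0.85, 0.75, 0.9]))  # → ['euphoric', 'energetic']
```

A production system would replace this lookup with a trained classifier and emit weighted scores for each descriptor rather than a hard top-k list.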
Our algorithm also incorporates a complementary technology layer based on a behavioral and statistical approach: in close collaboration with our clients, we use collaborative filtering to generate automatic predictions about their users’ interests and listening habits. This is achieved by monitoring users’ preferences and tastes (e.g., number of plays and skips, thumbs up/down, etc.).
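A minimal sketch of user-based collaborative filtering on this kind of implicit feedback: plays, skips and thumbs are folded into a single interaction score, users are compared on the tracks they have in common, and an unheard track is scored by a similarity-weighted average. The scoring formula and weights are illustrative assumptions.

```python
def feedback_score(plays, skips, thumb):
    """Fold implicit signals into one score; thumb is +1, -1 or 0. Weights are illustrative."""
    return plays - 2 * skips + 3 * thumb

# user -> {track: score}, built from observed listening behavior
ratings = {
    "u1": {"t1": feedback_score(10, 0, 1), "t2": feedback_score(1, 4, -1)},
    "u2": {"t1": feedback_score(8, 1, 1),  "t3": feedback_score(6, 0, 0)},
}

def similarity(a, b):
    """Cosine similarity over the tracks two users have in common."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[t] * b[t] for t in common)
    na = sum(a[t] ** 2 for t in common) ** 0.5
    nb = sum(b[t] ** 2 for t in common) ** 0.5
    return dot / (na * nb)

def predict(user, track):
    """Similarity-weighted average of other users' scores for `track`."""
    num = den = 0.0
    for other, scores in ratings.items():
        if other == user or track not in scores:
            continue
        sim = similarity(ratings[user], scores)
        num += sim * scores[track]
        den += abs(sim)
    return num / den if den else 0.0

print(predict("u1", "t3"))  # u1 hasn't heard t3; prediction borrows from u2
```

At catalogue scale this would run over sparse matrices with model-based factorization rather than pairwise loops, but the prediction logic is the same.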
Each unit (e.g., artist, album, track) is analyzed by our algorithm, which compares the information gathered through social aggregation against the unit’s manually expertized network of relationships (our 1.6-million-song dataset). In this process, the most relevant properties are weighted and then applied to the unit to complete its metadata, which on average amounts to 100 weighted parameters. Each of these parameters is in turn flagged with an expertise accuracy level.
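The metadata-completion step can be pictured as a merge in which socially aggregated properties carry their weights, manually expertized values take precedence, and every resulting parameter carries an accuracy flag. The field names, weights and flag labels below are illustrative assumptions, not our actual schema.

```python
def complete_metadata(aggregated, expertized):
    """Merge weighted aggregated properties with expertized values.

    Expert values override aggregated ones and receive the highest
    accuracy flag; everything else is kept but flagged as estimated.
    """
    merged = {}
    for prop, (value, weight) in aggregated.items():
        merged[prop] = {"value": value, "weight": weight, "accuracy": "estimated"}
    for prop, value in expertized.items():
        merged[prop] = {"value": value, "weight": 1.0, "accuracy": "verified"}
    return merged

aggregated = {"genre": ("rock", 0.7), "mood": ("upbeat", 0.55)}
expertized = {"genre": "indie rock"}
unit = complete_metadata(aggregated, expertized)
print(unit["genre"])  # expert value overrides the aggregated one
print(unit["mood"])   # aggregated value kept, flagged as estimated
```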
Every day, our experts work on thousands of artists, albums and tracks: spotting faults, removing duplicate content and improving accuracy are at the core of this process.
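One of the simpler checks such an editorial pipeline can run is flagging likely duplicates by normalizing titles before comparison, so variants like remaster tags collapse onto the same key. The normalization rules below are illustrative assumptions; real deduplication would also compare audio fingerprints and release metadata.

```python
import re

def normalise(title):
    """Lowercase, strip bracketed qualifiers like (Remastered 2009), drop punctuation."""
    title = re.sub(r"[\(\[].*?[\)\]]", "", title.lower())
    title = re.sub(r"[^a-z0-9 ]", "", title)
    return " ".join(title.split())

def find_duplicates(tracks):
    """Return (kept, duplicate) pairs of tracks sharing a normalized title."""
    seen, dupes = {}, []
    for track in tracks:
        key = normalise(track)
        if key in seen:
            dupes.append((seen[key], track))
        else:
            seen[key] = track
    return dupes

catalogue = ["Hey Jude", "Hey Jude (Remastered 2009)", "Let It Be"]
print(find_duplicates(catalogue))  # → [('Hey Jude', 'Hey Jude (Remastered 2009)')]
```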