User data contains many implicit biases that matter when operating production ML systems at scale. If these are not accounted for, they can degrade or distort the results of an ML system, forming part of a feedback loop that amplifies the effect over time. Popularity bias in recommender systems, whereby popular items occur more frequently in training data and are therefore disproportionately recommended, is one such example. This talk describes one approach, Inverse Propensity Weighting (IPW), used to mitigate these effects in BBC Sounds. IPW reweights incoming user data based on likelihood modelling: more popular items are assigned lower weights relative to less popular items. This increases both the personalisation and catalogue coverage of recommendations through increased novelty, and can be tuned to meet business needs. The approach is being implemented in the Sounds recommender system, with live user tests intended in the near future.
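
The reweighting idea can be sketched as follows. This is a minimal illustrative example, not the production implementation: the `ipw_weights` function, the `beta` tuning exponent, and the toy interaction list are all assumptions introduced here, with item popularity standing in for the likelihood-modelled propensity.

```python
from collections import Counter

def ipw_weights(interactions, beta=1.0):
    """Assign each (user, item) interaction a weight inversely
    proportional to the item's empirical popularity (a simple
    stand-in for a modelled propensity). beta tunes the strength:
    beta=0 gives uniform weights, beta=1 full inverse weighting."""
    counts = Counter(item for _, item in interactions)
    total = sum(counts.values())
    # Empirical propensity: the item's share of all interactions.
    propensity = {item: c / total for item, c in counts.items()}
    # Popular items get low weights; niche items get high weights.
    return [propensity[item] ** -beta for _, item in interactions]

# Toy data: one over-represented item and two niche items.
interactions = [("u1", "popular"), ("u2", "popular"), ("u3", "popular"),
                ("u4", "niche_a"), ("u5", "niche_b")]
weights = ipw_weights(interactions, beta=1.0)
```

With this toy data the three "popular" interactions each receive a weight of 5/3, while each niche interaction receives a weight of 5, so niche items contribute proportionally more to downstream training.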