Algorithmic recommendations are, fundamentally, editorial choices. Editorial sensitivity is key in shaping a broadcaster’s output, ensuring that content is both relevant to its audiences and reflective of its institutional values. It is difficult, however, to distil these complex considerations into a purely automated system. How do we ensure that the same guidelines and domain knowledge are applied at scale in recommender systems? This talk presents a subjective evaluation framework used by the BBC for embedding editorial guidelines in algorithmic recommendations. The approach was developed within BBC Datalab, with the aim of balancing compliance with editorial values against the need for a simple mechanism for iterative editorial review.