We are researching how product and developer teams working with AI/ML-driven recommendation systems apply technical and organisational measures to protect people from content-related harm. We are particularly interested in measures put in place to protect children and other vulnerable groups. This could include measures to prevent material and non-material harm from advertising, inappropriate content, or other adverse effects of profiling people and delivering content to them on the basis of that profiling.
We want to hear from designers, engineers and product team members about how they consider data protection in the design of AI/ML-driven recommendation systems, so that we can create more practical support. By recommendation systems we mean everything from ad targeting to ranking and personalisation systems. If you're interested in talking to our research team, please reach out to [email protected] with RECSYS in the subject line.
Privacy statement
Your answers will be stored in accordance with our privacy policy and will be shared within the ICO.