Location: Remote from Portugal

Contract: Part-time, ***** hours/month (flexible around academic commitments)

Duration: 6 months initially, with option to extend or transition to full-time

About Us

Checkstep is a trust and safety infrastructure platform helping major digital platforms detect and respond to harmful content across text, image, and video at scale. Frequency Land is our Portugal-based R&D arm, and we're building a research programme around the open scientific questions behind next-generation content moderation.

The Problem

AI moderation systems are brittle.
They overfit to surface form, fail on figurative or adversarial language, and can't adapt as communication evolves.
We're pursuing a core question: how can platforms consistently identify policy-relevant meaning in novel, figurative, adversarial, or culturally sensitive communication?
This sits at the intersection of computational linguistics, communication theory, and AI safety, and it has no satisfying answer yet.

What You'll Work On

One or more research streams, shaped with your input: adaptive policy learning for non-literal language; bias and safety in moderation systems; communication risk in multimodal content; or policy change recommendation and explanation.
You'll produce a defined research question, an experimental protocol, datasets or benchmarks where applicable, and a publishable technical report or paper.

Requirements

- Active PhD student or recent PhD in computational linguistics, NLP, AI safety, communication studies, cognitive science, or an adjacent field
- Strong publication record or demonstrable research output at top-tier venues
- Ability to define a research question with genuine scientific uncertainty
- Fluency in written English
- Tax residence in Portugal, or right to work under a Portuguese employment contract

Nice to Have

- Interest in figurative language, adversarial NLP, or computational pragmatics
- Familiarity with content moderation, trust and safety, or platform governance
- Experience designing annotation schemes, benchmarks, or human-subject experiments
- Connection to a Portuguese research institution, or willingness to establish one