This is a 3-month, 100% remote contract role based in India.
About the Role
Pulse Labs is empowering insights and elevating experiences for the world's top technology companies. Backed by investors including Google and Amazon, we're revolutionizing product development by delivering human-centered, actionable insights at every stage of the product lifecycle, from inception to market release and beyond. We empower top tech companies to create and refine paradigm-shifting mobile, smart home, and AI products.
We are seeking a Data Scientist to contribute to a short-term project focused on the evaluation of large language models (LLMs). The role will involve building and managing data pipelines, running structured model evaluations, and designing challenging test cases that probe model robustness. This is a highly cross-disciplinary role, blending data engineering, NLP, and analytical thinking.
If you thrive in a fast-paced environment, love solving challenging problems, and want to work with a creative, collaborative team, you are a great fit! You'll work directly with our Director of Data Services to develop solutions for a unique data processing project.
Key Responsibilities
- Build and maintain data pipelines for processing large-scale text datasets.
- Design and automate structured evaluation workflows for LLMs.
- Develop creative and challenging test scenarios to assess model reasoning and robustness.
- Curate and analyze outputs for quality, diversity, and compliance.
- Implement filters and safeguards to ensure content remains non-sensitive and benign.
- Document methodologies and contribute to best practices for model evaluation.
- Communicate clearly and effectively with team members across Engineering, Data Operations, and other departments.
- Work independently, anticipate problems, and provide solutions with minimal supervision.
Qualifications
Must Have
- 3–6 years of experience as a Data Scientist / NLP Engineer.
- Strong programming skills in Python (pandas, regex, JSON handling, automation scripts).
- Experience working with LLM APIs and prompt-driven workflows.
- Familiarity with text analysis, data preprocessing, and evaluation metrics.
- Strong analytical and problem-solving abilities.
- Innate curiosity, an eagerness to learn, and a desire to create in a collaborative environment.
Preferred
- Exposure to AI evaluation, adversarial testing, or ML robustness studies.
- Background in cybersecurity, red teaming, or applied ML safety.
- Experience collaborating with QA or annotation teams.
- Strong written communication for reporting findings and documenting workflows.
- Ability to start within 2 weeks.