AI Quality Engineer
Populix
- Jakarta Barat
- Permanent
- Full-time
- Design, develop, and execute manual and automated test plans for web, mobile, and AI-driven applications.
- Explore AI product use cases by simulating real-world scenarios and role-playing different types of users to identify limitations, edge cases, biases, and inaccuracies.
- Perform functional, regression, integration, and non-functional testing (e.g., performance, usability, security).
- Collaborate with Product Managers, Data Scientists, and Engineers to define quality standards and validate AI model outputs for accuracy, fairness, and reliability.
- Assess product quality from multiple angles, including end-to-end integration, interpretability of results, and UI/UX experience.
- Build and maintain test automation frameworks (e.g., Cypress, Selenium, Playwright, JUnit) to increase test efficiency and coverage.
- Support CI/CD pipelines with integrated automated testing for smooth, reliable releases.
- Clearly document test results, bugs, and edge-case analyses, providing actionable feedback.
- Champion a culture of quality throughout the product lifecycle, from requirements to release and beyond.
- Stay current with AI testing methodologies and propose innovative approaches to validate model performance and safety.
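The edge-case and role-playing exploration described above can be sketched in Python. This is a minimal, illustrative example (the model stub and test cases are hypothetical, not part of the role description): an automated suite that probes an AI text classifier with empty, whitespace-only, and very long inputs and reports failures instead of crashing.

```python
# Hedged sketch: automated edge-case checks for an AI model's output.
# classify_sentiment is a hypothetical stand-in for a real model call.

def classify_sentiment(text: str) -> str:
    """Stand-in for a real model; returns 'positive', 'negative', or 'neutral'."""
    lowered = text.lower()
    if any(w in lowered for w in ("great", "love")):
        return "positive"
    if any(w in lowered for w in ("bad", "hate")):
        return "negative"
    return "neutral"

# Each edge case pairs an adversarial input with the expected behavior.
EDGE_CASES = {
    "": "neutral",                    # empty input must not crash
    "   ": "neutral",                 # whitespace-only input
    "I LOVE it. " * 1000: "positive", # very long input
}

def run_edge_case_suite(model) -> list[str]:
    """Run every edge case; return failure descriptions (empty = all passed)."""
    failures = []
    for text, expected in EDGE_CASES.items():
        try:
            actual = model(text)
        except Exception as exc:  # a crash is itself a reported defect
            failures.append(f"crash on {text[:20]!r}: {exc}")
            continue
        if actual != expected:
            failures.append(f"{text[:20]!r}: expected {expected}, got {actual}")
    return failures
```

In practice such a suite would run inside the CI/CD pipeline mentioned above, with the stub replaced by a real model or API call.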
- 3+ years of QA experience (manual and automated testing).
- Strong ability to design test strategies, test cases, and automation frameworks.
- Hands-on experience with automation tools such as Cypress, Selenium, Playwright, JUnit, or similar.
- Familiarity with CI/CD pipelines and test integration within development workflows.
- Experience testing complex systems with multiple integration points (APIs, data pipelines, front-end, back-end).
- Excellent analytical and problem-solving skills, with a strong “think like a user” mindset.
- Effective communication skills to clearly report bugs, risks, and test results to both technical and non-technical stakeholders.
- 1+ year of experience testing AI/ML-based products, with knowledge of bias, model drift, fairness, and interpretability.
- Knowledge of Python or TypeScript, data pipelines, or AI testing frameworks, to collaborate effectively with data scientists.
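The bias and fairness knowledge asked for above can be made concrete with a small Python sketch. This is an illustrative example, not a Populix standard: it computes a demographic parity gap, i.e. the largest difference in positive-prediction rate between groups defined by a sensitive attribute.

```python
# Hedged sketch: a simple group-fairness check (demographic parity gap).
# Group names, predictions, and the metric choice are illustrative only.

def positive_rate(predictions: list[int]) -> float:
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions) if predictions else 0.0

def demographic_parity_gap(preds_by_group: dict[str, list[int]]) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Example: binary model predictions split by a sensitive attribute.
preds = {"group_a": [1, 1, 0, 1],   # positive rate 0.75
         "group_b": [1, 0, 0, 0]}   # positive rate 0.25
gap = demographic_parity_gap(preds)  # 0.75 - 0.25 = 0.5
```

A QA engineer might assert that this gap stays below an agreed threshold as part of the automated release checks.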