Research Scientist - LTX Model Quality
Lightricks
Who we are
Lightricks is an AI-first company building next-generation content creation technology for businesses, enterprises, and studios, with a mission to bridge the gap between imagination and creation. At our core is LTX-2, an open-source generative video model built to deliver expressive, high-fidelity video at unmatched speed. It powers both our own products and a growing ecosystem of partners through API integration.
The company is also known globally for pioneering consumer creativity through products like Facetune, one of the world’s most recognized creative brands, which helped introduce AI-powered visual expression to hundreds of millions of users worldwide. We combine deep research, user-first design, and end-to-end execution, from concept to final render, to bring the future of expression to all.
The role
Following the success of LTX-2, our widely adopted open-source text-to-audio+video model, we are expanding our efforts to develop cutting-edge audio+video generation models and are hiring Research Scientists to join our LTX-Applications team.
As a Research Scientist in the LTX Model Quality team, you will play a key role in elevating the quality, controllability, and alignment of our video generation model. This role focuses on the critical post-training phase: developing and implementing techniques such as preference optimization, reward modeling, and human feedback integration to refine model outputs. You will design robust evaluation frameworks, define quality metrics, and build systematic approaches to identify and address model failure modes. Your work will directly impact the quality of the videos we generate.
What you will be doing
- Develop and implement post-training pipelines, including RLHF, DPO, and other preference-based optimization techniques for video generation models.
- Fine-tune and control VLLMs for video and audio understanding.
- Design and iterate on quality evaluation metrics and frameworks.
- Conduct systematic failure mode analysis and develop targeted interventions to address quality gaps.
- Build and curate high-quality preference datasets and evaluation benchmarks that capture nuanced aspects of video generation quality.
- Collaborate closely with fellow researchers to establish tight feedback loops between human judgment and model improvement.
Your skills and experience
- Experience with post-training techniques for generative or multimodal models.
- Strong understanding of evaluation methodology, quality metrics, and benchmark design for generative AI.
- Solid software engineering skills and comfort working with complex ML training infrastructure.
- Understanding of relevant topics in statistics, experimental design, and perceptual quality assessment.
- Ability to translate subjective quality assessments into measurable, actionable model improvements.
- Enjoys iterative, detail-oriented work and takes pride in systematically improving model outputs.
- Loves diving into the data: curating, filtering, and tailoring it to the team’s tasks.