About Anterior:
Anterior is on a mission to transform healthcare administration, making it seamless and invisible so clinicians can focus on delivering care. We’ve built an AI-powered platform designed by clinicians, for clinicians, to simplify administrative workflows and improve patient outcomes. By combining clinical expertise with cutting-edge technology, we’re revolutionizing healthcare operations with responsible AI.
We’ve raised $23.6M in total funding from New Enterprise Associates (NEA), Sequoia Capital, and Neo, along with notable angels including the founders of DeepMind, Google AI, and Inflection AI. You can learn more about us here!
About Foundation Labs at Anterior:
Foundation Labs is Anterior’s applied research arm, where we turn cutting-edge AI research into real-world impact. Here, “applied” means “shipped”—we focus on delivering innovative AI systems that power transformative healthcare solutions. This unique domain sits at the intersection of research and infrastructure, solving complex challenges to deploy state-of-the-art technologies at scale.
We’re looking for exceptional, self-driven engineers to join as founding members of Foundation Labs. In this role, you’ll push the boundaries of what’s possible in AI, working on LLM inference systems that are accurate, scalable, and production-ready. This is a rare opportunity to help shape the future of healthcare AI in a fast-paced and collaborative environment.
The Role:
As a Founding Research Engineer, you’ll play a critical role in building and scaling our AI models and inference pipelines. You’ll work closely with both infrastructure experts and AI researchers to bring advanced research concepts into our production systems.
We hold a high standard at Anterior and are biased towards candidates who teach our team something new.
What You’ll Do:
Show us the future. Rapidly prototype and iterate on frontier solutions in collaboration with product and clinical teams.
Design and implement the architecture for hosting and scaling LLM-based systems from the ground up.
Apply research or industry experience to optimize our NLP systems for speed, scalability, and cost, tailored to enterprise customers.
Fine-tune and serve models or agent networks for novel applications that require out-of-the-box thinking.
About You:
Programming expertise: Proficient in Python, Golang, or Rust with a proven track record of building high-quality software.
AI systems knowledge: Hands-on experience collaborating with research teams to improve inference quality on multi-GPU systems.
LLM experience: Experience working with large language models (e.g., OpenAI, Anthropic, Mistral) and smaller language models (e.g., Llama, Phi).
Systems thinker: Strong understanding of distributed data processing and large-scale system architecture.
Problem-solving mindset: Ability to navigate complex problems and effectively communicate solutions.
Self-driven: Highly motivated, curious, and adaptable in fast-paced environments.
Preferred Qualifications:
A track record of building large-scale knowledge bases.
Expertise in deploying machine learning models with 70B+ parameters or on multi-GPU systems.
Expertise in model distillation.
Benefits:
📈 Early-Stage Equity
💰 Competitive, top-of-market salary
🫄 100% covered health, dental, and vision insurance
🍲 Catered lunches and a stocked kitchen
🚍 Commuter benefits
💻 Company laptop, along with the tools you need to succeed
🧠 Learning & development budget
🎤 Team-building events
🌴 Flexible PTO