Machine Learning Engineer
Remote
Salary - $160-180k
Summary:
This individual will be responsible for the design, build, and maintenance of machine learning models. The Machine Learning Engineer will play an integral role in implementing artificial intelligence solutions across the organization, collaborating with data scientists, data team members, and clinical operations teams to deploy, monitor, and maintain machine learning solutions that improve patient care, support operational excellence, and advance clinical research.
What you will do:
- Production Deployment and Model Engineering: Proven experience deploying and maintaining production-grade machine learning models, with a focus on real-time inference, scalability, and reliability.
- Scalable ML Infrastructure: Proficiency in developing end-to-end scalable ML infrastructure on cloud platforms such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Azure.
- Engineering Leadership: Ability to lead engineering efforts to create and implement methods and workflows for ML/GenAI model engineering, LLM advancements, and optimized deployment frameworks, aligned with the business's strategic direction.
- AI Pipeline Development: Experience in developing AI pipelines for various data processing needs, including data ingestion, preprocessing, and search and retrieval, ensuring solutions meet all technical and business requirements.
- Collaboration: Demonstrated ability to collaborate with data scientists, data engineers, analytics teams, and DevOps teams to design and implement robust deployment pipelines for continuous improvement of machine learning models.
- Continuous Integration/Continuous Deployment (CI/CD) Pipelines: Expertise in implementing and optimizing CI/CD pipelines for machine learning models, automating testing and deployment processes.
What gets you the job:
- Experience managing the end-to-end ML lifecycle.
- Experience managing infrastructure automation with Terraform.
- Experience with containerization technologies (e.g., Docker) and container orchestration platforms (e.g., Kubernetes).
- Experience with CI/CD tools (e.g., GitHub Actions).
- Proficiency in relevant programming languages and frameworks (e.g., Python, R, SQL).
- Deep understanding of coding, architecture, and deployment processes.
- Strong understanding of critical performance metrics.
- Extensive experience in predictive modeling, LLMs, and NLP.
- Ability to effectively articulate the advantages and applications of the retrieval-augmented generation (RAG) framework with LLMs.
Minimum Education:
Bachelor’s degree in computer science, artificial intelligence, informatics, or a closely related field.
Master’s degree in computer science, engineering, or a closely related field.