Senior MLOps Engineer - 100% Remote

Rehva Tech


Employment type: Not specified

Work model: Not specified

Working hours: 40 hours per week

Description


Who We Are:

We are a fintech company reimagining the future of financial services, building intelligent infrastructure powered by AI, blockchain, and thoughtful design. Our products serve millions of entrepreneurs across Brazil and the US every day, helping them grow with tools that are fast, fair, and built for how business actually works.

Who We’re Looking For:

We're looking for an MLOps Engineer to help us build ML infrastructure that scales dynamically from dozens to thousands of GPUs, reliably and efficiently.

You’ll be part of the AI R&D team, working closely with researchers and engineers to design systems for training, evaluating, and monitoring machine learning models at scale. This isn’t a research position, but your work will directly support researchers running large-scale experiments. You’ll help build fault-tolerant pipelines that preserve progress even when things break (out-of-memory errors, for example), and ensure that model development workflows can iterate with confidence.

Our current focus is on large-scale, non-interactive workloads: batch training, dataset-wide model evaluation, and metric-driven improvement loops. That said, the infrastructure you build may later support interactive tools and APIs.

You'll be contributing to system design under the guidance of senior ML researchers and infra engineers. Your role is to bring modern tooling and practical engineering to a demanding, GPU-heavy environment.


Responsibilities:

Build and maintain ML pipelines for data processing, training, evaluation, and model deployment.

Orchestrate batch and training jobs in Kubernetes, handling retries, failures, and resource constraints (a minimal sketch of this follows the list).

Design systems that scale dynamically from small GPU jobs to thousands of GPUs on-demand.

Collaborate with researchers to productionize their experiments into reproducible, robust workflows.

Implement model serving endpoints (REST/gRPC) and integrate with internal tooling.

Set up monitoring, logging, and KPI tracking for ML pipelines and compute jobs.

Automate CI/CD and infra provisioning for ML workloads.

Manage experiment tracking, model versioning, and metadata with tools like MLflow or W&B.

Support model serving infrastructure that may be used by internal UIs or tools in the future.
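
To make the orchestration items above more concrete, here is a minimal, purely illustrative sketch (not the team's actual tooling) of submitting a single-GPU training Job with the official kubernetes Python client, with a bounded retry budget so failed pods are re-run a few times; the image, namespace, and entrypoint are placeholders.

```python
# Illustrative sketch with assumed names throughout: submit a single-GPU
# training Job and let Kubernetes retry failed pods (OOM kills, preemptions)
# up to a bounded number of times before marking the Job failed.
from kubernetes import client, config


def submit_training_job(name: str, image: str, namespace: str = "ml-jobs") -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

    container = client.V1Container(
        name="trainer",
        image=image,
        command=["python", "train.py"],  # placeholder entrypoint
        resources=client.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
            limits={"nvidia.com/gpu": "1"},
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"job-name": name}),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(template=template, backoff_limit=3),  # retry up to 3 times
    )
    client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)


if __name__ == "__main__":
    submit_training_job("example-train-run", "registry.example.com/trainer:latest")
```

In a real setup a workflow engine (Argo Workflows, for example) would usually sit on top of raw Jobs, but the retry and GPU-request mechanics look much the same.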


Required Skills:

Kubernetes: Strong experience orchestrating jobs, not just deploying services. You should be confident in managing training workloads, GPU scheduling, job retries, and Helm-based deployments.

Python: Comfortable writing scripts and services that glue systems together. You don’t need to be a full-stack dev, but notebooks won’t cut it. Automation is the word here.

ML Workflows: Familiarity with data preprocessing, training, evaluation, and deployment pipelines.

Model Serving: Ability to expose models via FastAPI, TorchServe, or equivalent serving stacks (an illustrative sketch follows this list).

Linux: Strong CLI skills; you should know your way around debugging compute-heavy jobs.

Experience with ML metadata systems (MLflow, W&B, Neptune).

Know how to work side by side with AI assistants and agents.

Ability to communicate and debate in English and Portuguese.
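
As a small, purely illustrative companion to the model-serving and metadata items above, here is a minimal FastAPI endpoint that loads a model from an MLflow registry and exposes a REST prediction route; the registry URI, model name, and input schema are assumptions, not anything specified in this posting.

```python
# Minimal REST serving sketch; model name and schema are illustrative only.
import mlflow.pyfunc
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load a model from an (assumed) MLflow model registry at startup.
model = mlflow.pyfunc.load_model("models:/example-scorer/Production")


class PredictRequest(BaseModel):
    features: dict


@app.post("/predict")
def predict(request: PredictRequest):
    # pyfunc models accept a pandas DataFrame; wrap the single record accordingly.
    frame = pd.DataFrame([request.features])
    prediction = model.predict(frame)
    # Assumes the model returns a NumPy array of scores.
    return {"prediction": prediction.tolist()}

# Run with: uvicorn serve:app --host 0.0.0.0 --port 8000
```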

Nice-to-have skills:

Experience with orchestration tools (Airflow, Argo Workflows, Prefect).

Fluency in cloud environments (GCP, AWS, Azure).

Ability to write lean and customized Dockerfiles and Helm charts that run smoothly.

Exposure to distributed training frameworks (Ray, Horovod, Dask); a brief Ray sketch follows this list.

Deep understanding of GPU scheduling and tuning in Kubernetes environments.

Experience supporting LLM workloads or inference systems powering internal tools.
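
To make the distributed-compute and GPU-scheduling items above more tangible, here is a hedged Ray sketch of a GPU-aware batch-evaluation fan-out; the shard paths and the body of evaluate_shard are placeholders, not anything described in the posting.

```python
# Hedged sketch: fan out dataset-wide evaluation across GPU workers with Ray.
import ray

ray.init()  # connects to an existing cluster if RAY_ADDRESS is set, else starts a local one


@ray.remote(num_gpus=1)
def evaluate_shard(shard_path: str) -> float:
    # Placeholder: real code would load the model, run inference on this
    # dataset shard using the GPU Ray assigned, and return a metric.
    return 0.0


if __name__ == "__main__":
    shards = [f"s3://example-bucket/eval/shard-{i:03d}.parquet" for i in range(8)]
    scores = ray.get([evaluate_shard.remote(path) for path in shards])
    print("mean metric:", sum(scores) / len(scores))
```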


What You’ll Need to Succeed:

Curiosity about how things fail and how to keep them from failing.

Strong debugging chops, especially in distributed, resource-constrained environments.

A practical mindset: you know when to patch and when to fix.

Ability to collaborate across ML, research, and backend teams.

Ownership: you care about keeping systems reliable, scalable, and clean. 
