DRISHTICON Inc.
We have an immediate requirement for a Backend Engineer with one of our startup clients in the USA.
This is a contract position and 100% remote. Candidates must be available during PST business hours (8 am to 5 pm PST).
As a Backend Engineer specializing in AI/ML, you will:
Design, build, and maintain scalable backend services and microservices that support high-throughput product data pipelines.
Architect and implement data ingestion systems that can handle multiple formats (spreadsheets, JSON, CSV, XML, images, PDFs, APIs) and normalize them into internal canonical representations.
Build reliable, low-latency inference services (e.g. attribute extraction, classification, entity resolution, enrichment) that integrate with our ML models.
Manage and optimize data storage and retrieval — e.g. relational databases, NoSQL, vector databases / embeddings stores.
Implement asynchronous processing, event-driven architectures, and workflow orchestration to support long-running ETL / ML jobs.
Design APIs (internal and external) to expose data and services (REST, gRPC, streaming) and ensure versioning, backward compatibility, and performance.
Work with DevOps/Infrastructure teams to deploy, monitor, log, and scale systems in Google Cloud (GCP) — managing containers, serverless workloads, security, IAM, and resource optimization.
Ensure robustness, fault tolerance, security, and proper error handling in all systems.
Mentor junior engineers, conduct code reviews, and promote best practices around testing, documentation, and observability.
Must-Have Qualifications:
Strong Python experience (5+ years), with deep comfort writing production-quality, maintainable code.
Solid backend architecture skills: experience designing and building microservices, APIs, message queues, event-driven systems.
Familiarity with AI/ML pipelines: you understand how to integrate model inference into production, deal with batching, latency, scaling, and versioning.
Experience with data processing / ETL: parsing, cleaning, transforming data at scale.
Cloud experience, especially Google Cloud Platform (GCP): working with services like Cloud Functions, Cloud Run, Kubernetes / GKE, BigQuery, Pub/Sub, Cloud Storage, IAM, etc.
Knowledge of database systems: relational (PostgreSQL, MySQL), NoSQL (e.g. MongoDB, Cassandra), and possibly vector stores / specialized embedding stores.
Understanding of containerization (Docker) and orchestration.
Familiarity with observability, logging, monitoring, and alerting tools (e.g. Google Cloud Operations (formerly Stackdriver), Prometheus, Grafana, OpenTelemetry).
Strong debugging, profiling, optimization skills (memory, CPU, I/O).
Excellent collaboration, communication, and code review discipline.
Nice-to-Have / Differentiators:
Experience with transformer-based models, embeddings, semantic search, or large language model API integration.
Exposure to agentic or multi-agent systems, A2A protocols, Model Context Protocol (MCP) or other AI agent standards (aligned with company focus on AI-agent commerce).
Experience with vector databases (e.g. Pinecone, Milvus), similarity-search libraries such as FAISS, or hybrid retrieval systems.
Experience in eCommerce, product data pipelines, taxonomies, catalogs, or marketplace integrations.
Experience with Graph databases, knowledge graphs, or semantic networks.
Familiarity with infrastructure-as-code (Terraform, CloudFormation, etc.).
Security, compliance, data governance, or privacy experience.
What You Bring to the Team:
A mindset of quality, reliability, and scalability — you build systems that last.
Curiosity about emerging AI/ML technologies and how to bring them into production.
Ability to balance delivering features with technical debt, maintainability, and robustness.
Ownership and accountability — you’re comfortable driving projects end to end.
Excellent communication and ability to mentor others and work cross-functionally (e.g. AI scientists, product, design).