Job Description
Adeva is a global talent network that enables work without boundaries by connecting tech professionals with top companies worldwide.
We’re looking for a Mid/Senior AI Application Engineer to lead the development and maintenance of our enterprise-grade AI products and platform. You’ll be responsible for building scalable AI-powered applications, integrating ML/LLM capabilities into production systems, and ensuring reliability across data pipelines, infrastructure, and deployment workflows.
This role is ideal for someone who enjoys working end-to-end: from application design, API development, and orchestration to deploying ML systems on cloud infrastructure. Experience building chatbot or conversational AI solutions is highly valued.
Responsibilities
- Lead the development and maintenance of AI applications and platform services used in production enterprise environments.
- Build scalable backend services and APIs using Python (FastAPI) and modern distributed systems practices.
- Develop AI application features such as chatbots, conversational interfaces, retrieval pipelines, and agent workflows.
- Design and implement async execution workflows using Celery/Redis and event-driven processing where relevant.
- Integrate AI/ML models into production workflows using Vertex AI, pipelines, or custom model-serving services.
- Work with large-scale datasets and enable AI features using BigQuery and Cloud Storage.
- Collaborate across Product, ML, Data Engineering, and Platform teams to ship reliable AI solutions.
- Improve system observability, reliability, and scalability for AI services (monitoring, logging, performance).
- Contribute to and maintain CI/CD pipelines for AI applications and ML deployment workflows.
- Provide technical leadership, mentorship, and best practices for production-grade AI engineering.
Requirements
- 4+ years of experience as an AI Engineer, AI Application Engineer, or Backend Engineer with strong AI exposure.
- Expert Python engineering skills (clean code, scalable architecture, strong debugging ability).
- Hands-on experience building and deploying AI-powered applications into production.
- Strong experience with backend development using FastAPI (or similar frameworks).
- Familiarity with production deployment workflows using Docker + Kubernetes (GKE preferred).
- Strong understanding of GCP services, especially BigQuery, Cloud Storage, Pub/Sub, and Vertex AI.
- Experience working with large-scale data and supporting ML/AI workflows.
- Strong ownership mindset: able to lead initiatives from design to deployment.
Nice-to-Have
- Chatbot or conversational AI development experience (LLM-based or classical NLP chatbots).
- Experience with RAG (Retrieval-Augmented Generation), embeddings/vector search, and agent frameworks or orchestration tools.
- Experience with streaming/event-driven systems (Kafka, Pub/Sub).
- Retail industry experience or building AI solutions for retail use cases.
- Experience designing microservices and internal APIs at scale (gRPC, event messaging, distributed systems).
About Adeva
Adeva is an exclusive network of engineers, product, and data professionals that connects consultants with leading enterprise organizations and startups. Our network is distributed all over the world, with engineers in more than 35 countries. Our company culture fosters connections, careers, and employee growth. We are creating a workplace of the future that values flexibility, autonomy, and transparency. If that sounds like something you'd like to be part of, we'd love to hear from you.