
Senior Data Engineer - Ag & Trading

Apply now
Job ID: 326153 | Posted: 05/07/2026 | Location: Bangalore, India | Category: Digital Data & Technology | Job Status: Salaried Full Time

Job Purpose and Impact

The Senior Data Engineer designs, builds, and operates scalable, reliable data products and platforms that power analytics, reporting, and downstream applications. This role owns end-to-end delivery of batch and streaming data pipelines on a modern AWS-based cloud data platform, applying strong engineering patterns to ensure performance, security, observability, and cost efficiency.

With minimal supervision, the role partners closely with product, analytics, and platform teams to translate business requirements into robust technical solutions across a Lakehouse (Iceberg) and approved warehousing platforms (e.g., Snowflake). The Senior Data Engineer also mentors other engineers, drives code quality, and raises the engineering bar across the organization.

Key Accountabilities

Data & Analytical Solutions

  • Designs and delivers scalable data products using standard cloud and data engineering architectures.
  • Owns technical decisions (batch vs. streaming, Lakehouse vs. warehouse) and ensures solutions meet reliability, security, governance, latency, and cost requirements.
  • Reviews designs and contributes reusable components, templates, and standards.

Data Pipelines

  • Builds and operates end‑to‑end batch and streaming pipelines.
  • Implements transformations using SQL/dbt and PySpark as needed.
  • Integrates real‑time or event‑driven ingestion using Kafka.
  • Orchestrates workflows with Airflow; establishes SLAs/SLOs and CI/CD‑based deployments.
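The batch side of the pipeline work above typically follows a watermark-based incremental pattern: each run processes only rows newer than the last persisted watermark, so reruns are idempotent. Below is a minimal pure-Python sketch of that pattern; the function and field names (`run_incremental_batch`, `event_id`, `amount_usd`) are illustrative, not from the posting, and a production version would use PySpark or dbt as the bullets describe.

```python
def transform(row):
    """Example transformation: normalize a raw event into a curated record."""
    return {
        "event_id": row["id"],
        "amount_usd": round(float(row["amount"]), 2),
        "ingested_at": row["ts"],
    }

def run_incremental_batch(source_rows, watermark):
    """Process rows strictly newer than `watermark`; return the curated
    rows plus the new watermark to persist for the next run."""
    new_rows = [r for r in source_rows if r["ts"] > watermark]
    curated = [transform(r) for r in sorted(new_rows, key=lambda r: r["ts"])]
    # If nothing new arrived, the watermark stays where it was.
    new_watermark = max((r["ts"] for r in new_rows), default=watermark)
    return curated, new_watermark

source = [
    {"id": 1, "amount": "10.504", "ts": "2026-01-01T00:00:00Z"},
    {"id": 2, "amount": "7.1", "ts": "2026-01-02T00:00:00Z"},
]
# Row 1 is at the watermark, so only row 2 is picked up on this run.
curated, wm = run_incremental_batch(source, "2026-01-01T00:00:00Z")
```

Because the watermark comparison is strict and the transform is deterministic, replaying a failed run with the same inputs yields the same outputs, which is the property that makes CI/CD-based redeployments of such pipelines safe.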

Data Systems & Architecture

  • Optimizes data architectures for performance, scalability, and cost.
  • Applies best practices for Iceberg table design, incremental processing, and query optimization across Hive, Impala, Snowflake, and RDBMS.
  • Diagnoses systemic issues and drives remediation with platform teams.
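Much of the Iceberg query optimization mentioned above comes down to file pruning: each data file carries partition and column min/max statistics in table metadata, so the planner can skip files whose value range cannot match a predicate. This is a simplified sketch of that idea with hypothetical file names and stats, not an actual Iceberg API call.

```python
files = [
    {"path": "f1.parquet", "ts_min": "2026-01-01", "ts_max": "2026-01-31"},
    {"path": "f2.parquet", "ts_min": "2026-02-01", "ts_max": "2026-02-28"},
    {"path": "f3.parquet", "ts_min": "2026-03-01", "ts_max": "2026-03-31"},
]

def prune(files, lo, hi):
    """Keep only files whose [ts_min, ts_max] range overlaps [lo, hi];
    all other files provably contain no matching rows and are skipped."""
    return [f["path"] for f in files if f["ts_max"] >= lo and f["ts_min"] <= hi]

print(prune(files, "2026-02-10", "2026-02-20"))  # only f2 can contain rows
```

Choosing partition columns that align with common query predicates is what makes this pruning effective, which is why it shows up as a table-design accountability rather than a query-tuning one.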

Data Infrastructure (AWS)

  • Leads technical readiness across dev/test/prod environments.
  • Works hands‑on with AWS services including S3, Glue, Lambda, IAM, and SageMaker.
  • Partners with governance and platform teams on access control, tagging, and operational support.

Data Modeling & Formats

  • Leads modeling across RAW, CURATED, and SERVING layers.
  • Applies dimensional or normalized models for correctness, performance, and usability.
  • Implements efficient formats (Parquet + Iceberg) with clear schema evolution strategies.
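A concrete flavor of the schema evolution strategy above is additive evolution: new columns get defaults so that files written under an older schema remain readable. The sketch below shows the reader-side contract in plain Python; the schema and field names are hypothetical, and Iceberg enforces the equivalent contract at the table level rather than in application code.

```python
CURRENT_SCHEMA = {
    "order_id": None,   # required field: None means "no default, must be present"
    "quantity": 0,
    "currency": "USD",  # column added in schema v2, with a default
}

def read_with_evolution(record):
    """Fill in defaults for fields missing from older records;
    fail loudly if a required field is absent."""
    out = {}
    for field, default in CURRENT_SCHEMA.items():
        if field in record:
            out[field] = record[field]
        elif default is not None:
            out[field] = default
        else:
            raise KeyError(f"required field missing: {field}")
    return out

old_record = {"order_id": 42, "quantity": 3}  # written before "currency" existed
print(read_with_evolution(old_record))
```

Keeping evolution additive (add columns with defaults, never repurpose or silently drop them) is what lets RAW files stay immutable while CURATED and SERVING layers move forward.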

DevOps & CI/CD

  • Designs and improves Git‑based CI/CD pipelines and infrastructure‑as‑code using Terraform.
  • Ensures quality gates, auditability, and compliance with governance requirements.

Stakeholder & Engineering Leadership

  • Partners with product, analytics, and platform teams to align on requirements, data contracts, and SLAs.
  • Communicates complex technical topics clearly and leads technical discussions.
  • Coaches engineers and raises engineering standards through reviews and documentation.

AI‑First & Product Mindset

  • Uses GenAI‑assisted development responsibly to accelerate delivery.
  • Builds products, not just pipelines, focusing on usability, adoption, reliability, and lifecycle ownership.
  • Designs systems end‑to‑end and continuously optimizes cost‑performance trade‑offs using metrics.

Qualifications

  • 8+ years of total experience, including 6+ years of data engineering experience
  • Strong expertise in AWS‑based data engineering and scalable cloud architectures
  • Proven experience building end‑to‑end batch and streaming pipelines, including Kafka
  • Advanced proficiency in SQL, Hive, Impala, and PostgreSQL / RDBMS
  • Strong programming skills in Python and PySpark
  • Hands‑on experience with AWS Glue, Lambda, S3, IAM, and SageMaker
  • Experience with Snowflake and modern data warehousing
  • Expertise in CI/CD, Terraform, and DevOps practices
  • Proficiency in Airflow for workflow orchestration
  • Experience with Power BI for data visualization and reporting
  • Strong foundation in data modeling, performance optimization, and large‑scale data systems