At Cargill, we care about your safety and want your job search experience to be a positive one. Unfortunately, there are scams out there where individuals pretend to be Cargill recruiters to try to collect personal information or request payment. Please know that Cargill will never ask you for money during the hiring process, and in most cases, we only accept applications through our official careers site, with the exception of some roles in our production plants. If something doesn't feel right or you have questions, don't hesitate to contact us. To learn more, visit our Notice on Fraudulent Job Offers.

Senior Data Engineer - Ag & Trading

Apply Now
Job ID: 326153 | Date posted: 05/07/2026 | Location: Bengaluru, India | Category: DIGITAL TECHNOLOGY AND DATA (DT&D) | Job Status: Salaried Full Time

Job Purpose and Impact

The Senior Data Engineer designs, builds, and operates scalable, reliable data products and platforms that power analytics, reporting, and downstream applications. This role owns end‑to‑end delivery of batch and streaming data pipelines on a modern AWS‑based cloud data platform, applying strong engineering patterns to ensure performance, security, observability, and cost efficiency.

With minimal supervision, the role partners closely with product, analytics, and platform teams to translate business requirements into robust technical solutions across a Lakehouse (Iceberg) and approved warehousing platforms (e.g., Snowflake). The Senior Data Engineer also mentors other engineers, drives code quality, and raises the engineering bar across the organization.

Key Accountabilities

Data & Analytical Solutions

  • Designs and delivers scalable data products using standard cloud and data engineering architectures.
  • Owns technical decisions (batch vs. streaming, Lakehouse vs. warehouse) and ensures solutions meet reliability, security, governance, latency, and cost requirements.
  • Reviews designs and contributes reusable components, templates, and standards.

Data Pipelines

  • Builds and operates end‑to‑end batch and streaming pipelines.
  • Implements transformations using SQL/dbt and PySpark as needed.
  • Integrates real‑time or event‑driven ingestion using Kafka.
  • Orchestrates workflows with Airflow; establishes SLAs/SLOs and CI/CD‑based deployments.
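The incremental, watermark-driven pattern behind these pipeline responsibilities can be sketched in a minimal, stdlib-only Python example. The `Record` type, in-memory source/target, and watermark store are illustrative stand-ins for what would, in practice, be Glue jobs, S3/Iceberg tables, and pipeline state; none of the names below come from the posting.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative stand-ins: a source row, a keyed target table (dict),
# and a watermark store (dict). Real pipelines would use Glue/Iceberg.

@dataclass
class Record:
    id: int
    updated_at: datetime
    payload: str

def run_incremental_load(source, target, state, table="orders"):
    """Load only records newer than the stored watermark.

    Upserting by primary key and advancing the watermark last makes the
    run idempotent: re-running after a failure never duplicates rows.
    """
    watermark = state.get(table, datetime.min.replace(tzinfo=timezone.utc))
    new_rows = [r for r in source if r.updated_at > watermark]
    if not new_rows:
        return 0
    for r in new_rows:
        target[r.id] = r          # upsert by key, not append
    state[table] = max(r.updated_at for r in new_rows)
    return len(new_rows)
```

Running the load twice against an unchanged source moves zero rows the second time, which is the property an Airflow-scheduled batch job relies on when a task is retried.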

Data Systems & Architecture

  • Optimizes data architectures for performance, scalability, and cost.
  • Applies best practices for Iceberg table design, incremental processing, and query optimization across Hive, Impala, Snowflake, and RDBMS.
  • Diagnoses systemic issues and drives remediation with platform teams.
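Iceberg table design as described above typically centers on hidden partitioning, where partition values are derived from a column by a transform (e.g. `days(event_ts)`) rather than stored as an explicit column. A small, hypothetical helper that assembles such a Spark SQL DDL statement illustrates the shape; the table and column names are invented for the example.

```python
def iceberg_ddl(table, columns, partition_exprs):
    """Build a Spark SQL CREATE TABLE statement for an Iceberg table.

    `columns` maps column name -> SQL type; `partition_exprs` lists
    Iceberg partition transforms such as "days(event_ts)". Hidden
    partitioning means queries filter on event_ts directly and Iceberg
    prunes partitions without an extra partition column.
    """
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns.items())
    parts = ", ".join(partition_exprs)
    return (
        f"CREATE TABLE {table} (\n  {cols}\n)\n"
        f"USING iceberg\n"
        f"PARTITIONED BY ({parts})"
    )
```

Choosing the transform (days vs. hours, bucketing on a high-cardinality key) is exactly the kind of cost/performance decision this role owns.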

Data Infrastructure (AWS)

  • Leads technical readiness across dev/test/prod environments.
  • Works hands‑on with AWS services including S3, Glue, Lambda, IAM, and SageMaker.
  • Partners with governance and platform teams on access control, tagging, and operational support.

Data Modeling & Formats

  • Leads modeling across RAW, CURATED, and SERVING layers.
  • Applies dimensional or normalized models for correctness, performance, and usability.
  • Implements efficient formats (Parquet + Iceberg) with clear schema evolution strategies.
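A "clear schema evolution strategy" usually means additive, backward-compatible changes: old fields keep their types, and new fields are nullable so existing Parquet files remain readable. A toy compatibility check sketches the rule; real Iceberg evolution is richer (field IDs, type promotion), so treat this as an illustration only.

```python
def is_backward_compatible(old_schema, new_schema):
    """Additive schema-evolution check.

    Schemas map field name -> (type, nullable). Compatible when every
    old field survives with the same type and every added field is
    nullable, so rows written under the old schema still read cleanly.
    """
    for name, (dtype, _) in old_schema.items():
        if name not in new_schema or new_schema[name][0] != dtype:
            return False  # dropped or retyped field breaks old readers
    added = set(new_schema) - set(old_schema)
    return all(new_schema[name][1] for name in added)
```

Gating merges on a check like this (in CI, against the registered schema) is one common way teams keep SERVING-layer consumers from breaking silently.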

DevOps & CI/CD

  • Designs and improves Git‑based CI/CD pipelines and infrastructure‑as‑code using Terraform.
  • Ensures quality gates, auditability, and compliance with governance requirements.

Stakeholder & Engineering Leadership

  • Partners with product, analytics, and platform teams to align on requirements, data contracts, and SLAs.
  • Communicates complex technical topics clearly and leads technical discussions.
  • Coaches engineers and raises engineering standards through reviews and documentation.

AI‑First & Product Mindset

  • Uses GenAI‑assisted development responsibly to accelerate delivery.
  • Builds products, not just pipelines, focusing on usability, adoption, reliability, and lifecycle ownership.
  • Designs systems end‑to‑end and continuously optimizes cost‑performance trade‑offs using metrics.

Qualifications

  • 8+ years of total experience, including 6+ years of data engineering experience
  • Strong expertise in AWS‑based data engineering and scalable cloud architectures
  • Proven experience building end‑to‑end batch and streaming pipelines, including Kafka
  • Advanced proficiency in SQL, Hive, Impala, and PostgreSQL / RDBMS
  • Strong programming skills in Python and PySpark
  • Hands‑on experience with AWS Glue, Lambda, S3, IAM, and SageMaker
  • Experience with Snowflake and modern data warehousing
  • Expertise in CI/CD, Terraform, and DevOps practices
  • Proficiency in Airflow for workflow orchestration
  • Experience with Power BI for data visualization and reporting
  • Strong foundation in data modeling, performance optimization, and large‑scale data systems
Apply Now