Tesco India | Bengaluru, Karnataka, India | Hybrid | Full-Time | Permanent | Apply by 23-Jun-2025
About the role
Hadoop SDE 3 Job Description
Position Summary
As a Software Development Engineer 3 at Tesco, you hold a senior Individual Contributor role, demonstrating active technical leadership with proven impact across teams and the wider directorate. You take ownership and accountability for product development within your domain, contributing to organisational capabilities through coaching, mentoring, and involvement in hiring processes.
In this position, you drive the development of data engineering solutions, ensuring a balance between functional and non-functional requirements. Your responsibilities include planning and leading data engineering activities for strategic, large, and complex programmes; monitoring the application of data standards and architectures; and contributing to organisational policies and guidelines for data engineering.
As a technical leader, you play a crucial role in influencing technology choices, providing perspective across teams, and actively participating in critical product design and architecture efforts. You excel in delivering impactful projects, demonstrating technical expertise, and contributing to the maturity of development processes. Comfortable managing priorities and navigating ambiguity, you use data-driven decision-making and communicate effectively with your team and stakeholders.
Working within a multidisciplinary agile team, you are hands-on with product implementation across the entire stack, including infrastructure and platform-related efforts. Your role involves developing scalable data applications, ETL, data lake implementations, and analytics processing pipelines. Additionally, you lead a team of Data Engineers in a technical capacity, providing guidance and mentorship to junior and mid-level team members.
Mandatory skills: Spark, Scala or Java (Scala preferred), cloud-native development, SQL, and data structures.
Experience range: 8 to 11 years.
Skills
• Experience with Hadoop, Spark, and distributed computing frameworks.
• Professional hands-on experience in Scala.
• Professional hands-on experience in SQL and query optimisation.
• Strong computer science fundamentals: data structures and algorithms.
• Experience with programming design patterns.
• Experience in system design.
• Experience with CI/CD tooling such as Git, Docker, and Jenkins.
• Hands-on experience with data processing and data manipulation, including data warehousing concepts and SCD types.
• Exposure to at least one cloud platform (Azure preferred).
• Exposure to streaming data use cases via Kafka or Spark Structured Streaming (see the sketch after this list).
• Experience with NoSQL stores, message queues, and orchestration frameworks (Airflow/Oozie).
• Exposure to multi-hop architecture is an added advantage.
• Working knowledge of Ceph, Docker, Kubernetes, and Kafka Connect is an added advantage.
• Working experience with data lakes, the medallion architecture, Parquet, and Apache Iceberg is desirable.
• Experience with data governance tools such as Alation and Collibra, and with data quality, data lineage, and metadata, is preferred.
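To illustrate the kind of streaming work this role involves, below is a minimal sketch of a Kafka-to-Spark Structured Streaming pipeline in Scala. It is illustrative only: the broker address, topic name, and checkpoint path are hypothetical placeholders, and it assumes Spark 3.x with the spark-sql-kafka connector on the classpath.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object StreamingSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-streaming-sketch")
          .getOrCreate()

        // Read a stream of raw events from a Kafka topic (hypothetical broker/topic).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

        // Count events per 5-minute window, tolerating 10 minutes of late data.
        val counts = events
          .withWatermark("timestamp", "10 minutes")
          .groupBy(window(col("timestamp"), "5 minutes"))
          .count()

        // Write incremental updates to the console; a real job would use a durable sink.
        counts.writeStream
          .outputMode("update")
          .format("console")
          .option("checkpointLocation", "/tmp/checkpoints/events") // hypothetical path
          .start()
          .awaitTermination()
      }
    }

In production, the console sink would typically be replaced by a Kafka, Delta, or Iceberg sink, with the checkpoint location on durable storage so the query can recover exactly where it left off.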
What you'll do:
• Develop big data solutions for a large enterprise.
• Build streaming pipelines using Spark Streaming, Kafka/Event Hubs, or a similar technology stack.
• Develop, implement, and tune distributed data processing pipelines that handle large volumes of data (a tuning sketch follows this list).
• Architect and design enterprise-scale products with a focus on scalability, low latency, and fault tolerance.
• Write complex, highly optimised SQL queries and data pipelines across large data sets.
• Work effectively in a team setting as well as individually.
• Communicate and collaborate with external teams and stakeholders.
• Bring a growth mindset, learning and contributing across different modules or services.
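As a rough illustration of the tuning work mentioned above, here is a minimal sketch of a batch aggregation job in Scala with basic tuning hooks. The paths, column names, and partition count are hypothetical assumptions, not values taken from this posting.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object BatchTuningSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("batch-tuning-sketch")
          // Size the shuffle to the cluster rather than taking the default of 200.
          .config("spark.sql.shuffle.partitions", "400")
          .getOrCreate()

        // Hypothetical input: sales records stored as Parquet.
        val sales = spark.read.parquet("/data/sales")

        // Aggregate revenue per store per day; the shuffle width is set above.
        val daily = sales
          .groupBy(col("store_id"), col("sale_date"))
          .agg(sum(col("amount")).as("total_amount"))

        // Partition output by date so downstream reads can prune partitions.
        daily.write
          .mode("overwrite")
          .partitionBy("sale_date")
          .parquet("/data/daily_sales")
      }
    }

Typical tuning levers beyond this sketch include choosing join strategies (broadcast vs. shuffle), caching reused intermediates, and matching output file sizes to the reader's split size.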
What you'll bring:
- Working knowledge of big data engineering: ingestion and integration with third-party and in-house data sources; building metadata management, data quality, and master data management solutions; handling structured and unstructured data.
- Excellent written and oral communication skills.
- Self-starter with quick learning abilities.
- Ability to multi-task and work under stringent deadlines.
- Ability to understand and work on various internal systems.
- Ability to work with multiple stakeholders.
- Experience leading a multi-skilled agile team and delivering high-visibility, high-impact projects.
Minimum Qualifications:
Bachelor's degree in computer science, information technology, engineering, or information systems, and 8+ years' experience in software engineering or a related area at a technology, retail, or data-driven company.
Qualifications
Mandatory skills: Spark, Scala or Java (Scala preferred), Hadoop, system design, data architecture, cloud-native development, SQL, and data structures.
 
What is in it for you
At Tesco, we are committed to providing the best for you. 
 
As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities, and planet a little better every day.
 
Our Tesco Rewards framework consists of three pillars: Fixed Pay, Incentives, and Benefits.
 
Total Rewards offered at Tesco are determined by four principles: simple, fair, competitive, and sustainable.
 
Salary - Your fixed pay is the guaranteed pay as per your contract of employment. 
 
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy. 
 
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
 
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their families. Our medical insurance provides coverage for dependents, including parents or in-laws.
 
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.  
 
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.  
 
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.  
 
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. 
About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. 
 
Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 330,000 colleagues.
 
Tesco Technology
 
Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles. 
 
At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations, from identifying and authenticating customers, through managing products, pricing, promotions, and product discovery, to facilitating payment and ensuring delivery. By developing a comprehensive Retail Platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly, without the need to overhaul our technology, thanks to the capabilities we have built.