Our client, a leading insurer, is looking for a data engineer to join the team.
Responsibilities
Design and develop scalable data platforms with ETL/ELT pipelines for diverse data sources.
Implement automation using CI/CD pipelines to optimize workflows.
Manage Data Lakes, relational databases (e.g., PostgreSQL), and NoSQL databases (e.g., MongoDB).
Utilize PySpark for distributed data processing and Apache Kafka for data streaming.
Ensure data quality, security, and regulatory compliance.
Collaborate with data scientists and analysts to support analytics initiatives.
Optimize pipelines for performance and cost-efficiency across AWS and Azure.
Contribute to key business areas such as risk modeling and financial processing.
Requirements
Bachelor’s or Master’s in Computer Science, Engineering, or related field.
3+ years of data engineering experience.
Expertise in AWS and Azure.
Proficient in Python, SQL, and NoSQL databases (e.g., MongoDB).
Hands-on experience with PySpark and Apache Kafka.
Skilled in Data Lakes and automation tools.
Strong problem-solving and communication skills for agile environments.
If this outstanding opportunity sounds like your next career move, please apply through "Apply Now" or send your resume in Word format to Harry Tsang at resume@pinpointasia.com with "Data Engineer - Leading Insurance Company" in the subject line.
Data provided is for recruitment purposes only.
