Senior Software Development Engineer
The ASE Commerce Data Instrumentation and Integration team is responsible for building the systems that generate real-time, high-quality, privacy-conscious signals from commerce activities. These signals are used across Apple — powering everything from analytics and quality monitoring to fraud detection and machine learning. As an engineer on this team, you'll work on scalable, distributed infrastructure that processes high-throughput, real-time data. You'll help define how events are collected and delivered at scale, and ensure our systems are designed with reliability, privacy, and flexibility at their core.
Key responsibilities include
- Designing and building real-time data pipelines and services that transform and deliver signals to a wide range of consumers (see the pipeline sketch after this list)
- Developing and maintaining instrumentation libraries used across ASE Commerce services
- Processing structured, semi-structured, and unstructured data across streaming and batch workflows
- Integrating with object stores, event streams, and data platforms
- Ensuring all systems are built with privacy, scalability, and observability as foundational principles
- Collaborating across groups and teams from conception to production
- Supporting downstream use cases such as analytics, fraud detection, quality monitoring, and machine learning
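As a rough illustration of the consume-transform-deliver pattern these responsibilities describe, here is a minimal Java sketch using the standard Kafka client API. The topic names, consumer group, and trim-based transform are hypothetical placeholders, not the team's actual pipeline.

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class PipelineSketch {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "signal-pipeline"); // hypothetical consumer group
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("commerce-events-raw")); // hypothetical input topic
            while (true) {
                // Poll a batch of raw events, apply a transform, and deliver
                // the result to a topic read by downstream consumers.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(record -> {
                    String transformed = record.value().trim(); // placeholder transform
                    producer.send(new ProducerRecord<>("commerce-signals", // hypothetical output topic
                            record.key(), transformed));
                });
            }
        }
    }
}
```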
Minimum Qualifications
- Experience developing and maintaining distributed backend systems using Java or similar languages
- Familiarity with message-based systems and real-time data pipelines (e.g., Kafka)
- Deep understanding of distributed systems concepts, including fault tolerance and scalability
- Experience operating systems that support high-throughput workloads in production
- Strong collaboration and communication skills in a cross-functional environment
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent experience
Preferred Qualifications
- Understanding of data privacy best practices and compliance standards (e.g., GDPR)
- Experience with stream and batch processing frameworks such as Apache Flink, Apache Spark, or Kafka Streams (see the Kafka Streams sketch after this list)
- Experience working with cloud object storage (e.g., Amazon S3, Google Cloud Storage) and columnar data formats (e.g., Parquet, ORC)
- Familiarity with distributed state stores or in-memory data grids (e.g., Atomix, Hazelcast, RocksDB)
- Familiarity with data lake, data warehouse, or lakehouse technologies, including Hive, Trino, or Presto
- Experience building or maintaining instrumentation frameworks or observability tooling
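For the stream-processing frameworks named above, a minimal Kafka Streams topology in Java might look like the following. The application id, topic names, and filter/map logic are assumed placeholders under the same hypothetical setup as the earlier sketch.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class SignalTopologySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "commerce-signal-cleaner"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read raw events, drop records that fail basic validation,
        // and route the cleaned stream to a topic consumed downstream.
        KStream<String, String> raw = builder.stream("commerce-events-raw"); // hypothetical topic
        raw.filter((key, value) -> value != null && !value.isEmpty())
           .mapValues(String::trim)        // placeholder transform; real logic would parse/enrich
           .to("commerce-signals-clean");  // hypothetical output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        streams.start();
    }
}
```

Compared with the raw-client loop above, a framework like Kafka Streams handles partition assignment, state, and fault tolerance for you, which is one reason such frameworks appear in these qualifications.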