As a member of the Data team, you will design and build cloud-based applications and data systems that support enterprise data management and analytics capabilities.
You will join the core team responsible for design, development, and implementation, working with internal and external business and technology partners.
You will help define the system architecture and technology stack and support their tactical implementation.
You will manage large datasets, tackle technical challenges, and support streaming data in cloud ecosystems and hybrid environments; build large, complex data pipelines; integrate incoming data from multiple sources in different formats; implement continuous integration/continuous delivery (CI/CD) pipelines; automate everything we can; and help implement our API-first design principle.
You will help build scalable distributed solutions while working across the full stack.
You must understand the software development life cycle (SDLC) in Waterfall, Lean, and Agile work environments.
Previous experience in capital markets is preferred, as is a willingness to learn multiple programming languages.
Experience working with cloud ecosystems such as AWS, Azure, or GCP
5+ years of hands-on experience with big data and distributed data processing frameworks such as Hadoop, Kafka, Hive, and Presto
3+ years of hands-on experience with stream processing engines such as Apache Storm, Spark, Flink, or Beam
Knowledge of DevOps tools and technologies such as Terraform, Git, and Jenkins
Experience with table formats such as Iceberg, Delta Lake, or Hudi is a plus
Experience with multiple programming languages such as Python, Java, and Scala
Strong knowledge of SQL
Familiarity with Kubernetes and container orchestration technologies such as Rancher, EKS, or GKE
Experience with data storage formats such as Apache Parquet, Avro, or ORC