Kafka Engineer

Industry: Retail
Remote
Company size: 10K+
Full-time/Part-time
Not disclosed

Apply as Kafka Engineer

Job description

A US-based company that is home to the world's largest selection of guitars and musical equipment is looking for a Kafka Engineer. The engineer will play a critical role in modernizing the platform's current infrastructure while building robust, scalable features. The company operates the world's largest multichannel musical instrument retail service and is on a mission to develop and nurture lifelong musicians and make a difference in the musical world. It has raised more than $30 million in funding so far.

Job Responsibilities:

  • Help implement Confluent Kafka streaming and improve middleware administration
  • Set up Kafka brokers, Kafka MirrorMaker, and ZooKeeper on hosts in collaboration with the Infrastructure team
  • Design, build, and maintain Kafka topics
  • Contribute to tuning and architecture, applying a strong understanding of Kafka Connect and Linux fundamentals
  • Monitor Kafka health metrics and alerts, taking timely action when issues arise
  • Implement real-time and batch data ingestion pipelines employing best practices in data modeling and ETL/ELT operations
  • Participate in technological decisions and work with smart colleagues
  • Review code and implementations, providing constructive feedback to help others develop better solutions
  • Develop documentation on design, architecture, and solutions
  • Provide assistance and coaching to peers and more junior engineers
  • Build good working relationships at all levels of the organization and across functional teams
  • Take accountability for project timelines and deliverables
  • Create dataflows and pipelines ranging from simple to complex
  • Support the investigation and resolution of production issues
  • Maintain a high level of system and data security, ensuring that the application's confidentiality, integrity, and availability are not compromised
  • Translate stakeholder needs into language that can be adopted for Behavior-Driven Development (BDD) or Test-Driven Development (TDD)
  • Build solutions that are stable, scalable, and easy to use while fitting into the broader data architecture
  • Assist in the formation of Communities of Practice
  • Continuously improve the performance of source code using industry-standard approaches
  • Steer technology direction and options by offering suggestions based on experience and research
  • Encourage the creation of team norms and procedures
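
As context for the broker-setup responsibilities above (not part of the original posting), a minimal `server.properties` sketch for a single Kafka broker might look like the following; all host names, paths, and values are illustrative assumptions, not the company's actual configuration:

```
# Unique ID for this broker within the cluster
broker.id=1
# Listener the broker binds to (illustrative host/port)
listeners=PLAINTEXT://0.0.0.0:9092
# ZooKeeper connection string (illustrative ensemble)
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
# Where the broker stores log segments
log.dirs=/var/lib/kafka/data
# Defaults applied to newly created topics
num.partitions=3
default.replication.factor=3
```

In practice, values such as partition counts and replication factors would be tuned per environment in collaboration with the Infrastructure team.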

Job Requirements:

  • Bachelor’s/Master’s degree in Engineering, Computer Science (or equivalent experience)
  • 7+ years of hands-on experience with data pipelines and application integrations
  • Experience designing and developing Kafka clusters and producers/consumers
  • Proficiency in enabling cloud/hybrid-cloud data streaming through Confluent Kafka, SQS/SNS queuing, etc.
  • Strong container expertise, especially Docker 
  • Strong skills with technologies such as Ansible, Puppet, Terraform, OpenShift, Kubernetes, AWS, AWS Lambda, and event streaming
  • Working experience in both public cloud environments and on-premises infrastructure
  • DataDog, Splunk, KSQL, Spark, and PySpark experience is a plus
  • Excellent knowledge of distributed architectures, including Microservices, SOA, RESTful APIs, and data integration architectures 
  • Familiarity with any of the following message/file formats: Parquet, Avro, ORC
  • Excellent understanding of AWS Cloud Data Lake technologies, including Kinesis/Kafka, S3, Glue, and Athena
  • Knowledge of RabbitMQ and TIBCO messaging technologies is advantageous
  • Prior experience designing and implementing data models for applications, operations, or analytics
  • A track record of working with information repositories, data modeling, and business analytics tools is a strong plus
  • Familiarity with databases, data lakes, and schemas, with advanced expertise in online transactional processing (OLTP) and online analytical processing (OLAP)
  • Experience in Streaming Service, EMS, MQ, Java, XSD, File Adapter, and ESB-based application design and development
  • Capable of working in a fast-paced team to keep the data and reporting pipeline running smoothly

Interested in this job?

Apply to Turing today.

Apply now

How to become a Turing developer?

Work with the best software companies in just 4 easy steps
  1. Create your profile

    Fill in your basic details - Name, location, skills, salary, & experience.

  2. Take our tests and interviews

    Solve questions and appear for a technical interview.

  3. Receive job offers

    Get matched with the best US and Silicon Valley companies.

  4. Start working on your dream job

    Once you join Turing, you’ll never have to apply for another job.

Leadership

In a nutshell, Turing aims to make the world flat for opportunity. Turing is the brainchild of serial AI entrepreneurs Jonathan and Vijay, whose previous AI firm, successfully acquired, was powered by exceptional remote talent. Turing's band of innovators also includes high-profile investors such as Adam D'Angelo (Facebook's first CTO) and executives from Google, Amazon, Twitter, and Foundation Capital.

Equal Opportunity Policy

Turing is an equal opportunity employer. Turing prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, age, disability status, protected veteran status, or any other characteristic protected by law.

Work full-time at top U.S. companies

Create your profile, pass Turing Tests and get job offers as early as 2 weeks.

Apply now