Hard Rock Digital: Senior Data DevOps Engineer
Headquarters: Florida, United States
Job description
What are we building?
Hard Rock Digital is a team focused on becoming the best online sportsbook, casino, and social gaming company in the world. We’re building a team that resonates with passion for learning, operating and building new products and technologies for millions of consumers. We care about each customer's interaction, experience, behavior, and insight and strive to ensure we’re always acting authentically.
Rooted in the kindred spirits of Hard Rock and the Seminole Tribe of Florida, the new Hard Rock Digital taps a brand known the world over as the leader in gaming, entertainment, and hospitality. We’re taking that foundation of success and bringing it to the digital space — ready to join us?
What’s the position?
We are seeking an experienced Senior Data DevOps Engineer who excels at architecting scalable data infrastructure, leading technical initiatives, and mentoring engineering teams.
You'll work as a senior member of the Data DevOps team, driving strategic technical decisions while collaborating with Data Science, Machine Learning, Reporting, and other data-related teams to deploy and support cutting-edge data applications. This high-impact role will position you as a technical leader, shaping the future of our data infrastructure and establishing best practices across the organization.
What will you do?
As a Senior Data DevOps Engineer, you will:
- Architect and lead the design of complex, enterprise-scale data pipelines using Airflow, DBT, and Databricks.
- Define and implement strategies for pipeline performance optimization to support real-time and batch processing at scale.
- Lead the design and optimization of AWS-based data infrastructure, including S3, Lambda, and Snowflake architecture.
- Establish and enforce best practices for cost-efficient, secure, and scalable data processing across the organization.
- Design and optimize AWS SageMaker environments for ML teams, ensuring optimal performance and resource utilization.
- Lead cross-functional collaboration with ML, Data Science, and Reporting teams to establish data strategy and ensure seamless data accessibility.
- Design and implement comprehensive data pipeline monitoring, alerting, and logging frameworks to proactively detect failures and performance bottlenecks.
- Architect automation solutions for data quality, lineage tracking, and schema evolution management.
- Lead incident response efforts, performing complex troubleshooting and root cause analysis for critical data issues.
- Champion and evolve Data DevOps best practices, driving automation, reproducibility, and scalability across the organization.
- Mentor junior and mid-level engineers, conducting code reviews and providing technical guidance.
- Establish technical standards and document complex infrastructure patterns, data workflows, and operational procedures.
- Evaluate and recommend new technologies and tools to improve data infrastructure and workflows.
Job requirements
What are we looking for?
We are looking for a seasoned Senior Data DevOps Engineer with a proven track record of leading technical initiatives, architecting enterprise-scale data infrastructure, and mentoring engineering teams. You should have extensive experience designing robust data pipelines.
Interview Prep Guide
Preparation Strategy
To prepare for this role, review data pipeline architecture with Airflow and DBT, along with AWS services such as S3, Lambda, and Snowflake. Practice designing and optimizing both individual pipelines and broader data infrastructure, including AWS SageMaker environments, and be ready to discuss trade-offs and best practices. Finally, review your past experience so you can speak to your leadership and collaboration skills and your approach to problem-solving and incident response.
Likely Interview Rounds
- 1. Technical (~60 min)
What to prep: Review data pipeline architecture, Airflow, DBT, and AWS services such as S3, Lambda, and Snowflake. Practice designing and optimizing data pipelines, and be prepared to discuss trade-offs and best practices.
- How would you design a scalable data pipeline using Airflow and DBT?
- What strategies would you use to optimize pipeline performance for real-time and batch processing?
- Can you describe your experience with AWS-based data infrastructure, including S3, Lambda, and Snowflake architecture?
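A whiteboard answer to the first question usually starts by modeling the pipeline as a directed acyclic graph of tasks (extract, load, transform, test). The sketch below is a stdlib-only illustration of that idea; the task names and dependency map are hypothetical, and in a real answer Airflow's DAG and operator API (with `>>` wiring) would replace this hand-rolled scheduler:

```python
from graphlib import TopologicalSorter  # stdlib DAG resolver (Python 3.9+)

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how Airflow operators are wired with >> / set_upstream.
PIPELINE = {
    "extract_events": set(),
    "load_to_s3": {"extract_events"},
    "copy_into_warehouse": {"load_to_s3"},
    "dbt_run": {"copy_into_warehouse"},
    "dbt_test": {"dbt_run"},
}

def execution_order(dag: dict[str, set[str]]) -> list[str]:
    """Return a valid run order; raises CycleError if the graph has a cycle."""
    return list(TopologicalSorter(dag).static_order())

print(execution_order(PIPELINE))
```

Being able to reason about topological order, fan-out, and cycle detection is what the interviewer is probing; the specific orchestrator is secondary.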
- 2. System design (~60 min)
What to prep: Review system design principles, data infrastructure, and cloud services such as AWS SageMaker. Practice designing and optimizing systems, and be prepared to discuss trade-offs and best practices.
- How would you design a data infrastructure to support real-time and batch processing at scale?
- Can you describe your experience with designing and optimizing AWS SageMaker environments for ML teams?
- How would you establish and enforce best practices for cost-efficient, secure, and scalable data processing across the organization?
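For the real-time side of these questions, interviewers often ask how an event-driven ingestion step works. Below is a hedged sketch of a Lambda-style handler that parses the standard S3 `ObjectCreated` event shape and filters keys to a landing prefix; the bucket layout and prefix are made up, and the actual warehouse load (for example Snowpipe or a `COPY INTO` job) is left as a comment:

```python
import urllib.parse

LANDING_PREFIX = "raw/events/"  # hypothetical prefix this pipeline ingests from

def handler(event: dict, context=None) -> dict:
    """Collect S3 object keys from a put-event and filter to our prefix."""
    keys = []
    for record in event.get("Records", []):
        obj = record.get("s3", {}).get("object", {})
        # S3 URL-encodes keys in event payloads (spaces arrive as '+')
        key = urllib.parse.unquote_plus(obj.get("key", ""))
        if key.startswith(LANDING_PREFIX):
            keys.append(key)
    # In a real pipeline: trigger Snowpipe or queue a COPY INTO for these keys.
    return {"matched": keys, "skipped": len(event.get("Records", [])) - len(keys)}
```

The cost and security follow-ups then hang off this naturally: prefix-scoped IAM policies, batching small objects before loading, and dead-letter queues for malformed events.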
- 3. Behavioral (~60 min)
What to prep: Review your past experiences and be prepared to discuss your leadership and collaboration skills, as well as your approach to problem-solving and incident response.
- Can you describe a time when you led a cross-functional team to establish a data strategy?
- How do you ensure seamless data accessibility across different teams and stakeholders?
- Can you describe your experience with incident response efforts, performing complex troubleshooting and root cause analysis for critical data issues?
Most Likely Questions
- What is your experience with data pipeline architecture and optimization?
- Can you describe your experience with cloud services such as AWS?
- How do you ensure data quality and lineage tracking in your data pipelines?
- Can you describe your experience with data infrastructure design and optimization?
- How do you approach incident response efforts and root cause analysis for critical data issues?
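The data-quality question above is easier to answer with a concrete check in mind. This is a minimal stdlib sketch of the kind of row-level validation a pipeline might run before loading; in practice dbt tests or a framework like Great Expectations would cover this, and the column names and threshold here are illustrative only:

```python
def null_rate(rows: list[dict], column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_quality(rows, required=("user_id", "event_ts"), max_null_rate=0.01):
    """Return a list of failed checks; an empty list means the batch passes."""
    failures = []
    for col in required:
        rate = null_rate(rows, col)
        if rate > max_null_rate:
            failures.append(f"{col}: null rate {rate:.2%} exceeds {max_null_rate:.0%}")
    return failures
```

A strong interview answer pairs checks like these with where they run (pre-load gate vs. post-load audit) and what happens on failure (quarantine the batch, alert, or block downstream models).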
Common Pitfalls
- Lack of experience with data pipeline architecture and optimization
- Insufficient knowledge of cloud services such as AWS
- Inability to design and optimize data infrastructure
- Poor problem-solving and incident response skills
Free Prep Resources
- Apache Airflow
- DBT
- AWS
- S3
- Lambda
- Snowflake
- AWS SageMaker