Data Engineer
Pivotal Health
Description
About Pivotal Health
Pivotal Health is the leading technology platform that helps healthcare providers get paid fairly in an increasingly complex reimbursement landscape.
Today, many providers face persistent underpayment from health insurance companies, despite delivering high-quality care. While processes like IDR (Independent Dispute Resolution) were designed to promote fairness, they're often administrative-heavy, time-consuming, and difficult to navigate without the right tools.
Pivotal Health combines software, data, and service into a seamlessly integrated, AI-driven platform that simplifies these complex reimbursement workflows. We help providers efficiently dispute underpaid claims, reduce administrative burden, and recover the reimbursement they're entitled to, without adding more work to already stretched teams.
Our full-service IDR solution is just the starting point. We're building solutions that enable providers to operate with clarity, control, and confidence across the reimbursement journey.
About the Role
We're hiring a Data Engineer to sit at the intersection of our analytics and engineering teams. You'll be responsible for making Pivotal's product data accessible, reliable, and ready for analysis, connecting data sources to our warehouse, building clean transformation pipelines, and ensuring our analysts have what they need to drive business decisions.
This is not a traditional software engineering role, nor is it a pure analyst role. You'll bring a strong technical foundation and apply it in service of business outcomes: faster reporting, better data access, and more reliable pipelines that the team can actually trust.
If you enjoy building the infrastructure that makes great analysis possible and care about the business impact of your work, this role is for you.
What You'll Do
Own the pipeline from product database to analytics warehouse: Take full ownership of extracting data from our PostgreSQL product database and loading it into BigQuery. Design and maintain the ETL processes that make this happen reliably, with the right structure for downstream analytics use.
Bring in new data sources: Expand our analytics footprint by integrating new data sources, including third-party tools like Salesforce, into our warehouse. You'll partner with our DevOps team to establish the right service accounts, permissions, and connection patterns to do this securely and correctly.
Build and maintain analytics-ready tables: Use dbt to design, build, and manage the transformation layer that turns raw data into clean, well-structured tables. You'll have real ownership over what the data looks like: what gets modeled, how it's shaped, and what makes it most useful for reporting.
Support reporting and business insights: Work alongside our analysts to support the reporting layer, ensuring data is fresh, accurate, and structured in a way that makes building dashboards and reports in Tableau, Power BI, or Metabase reliable and efficient.
Be the bridge between analytics and engineering: Attend engineering team meetings to stay ahead of product changes that could affect analytics. Serve as the connective tissue between both teams, translating data needs into technical solutions and keeping everyone aligned.
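The pipeline ownership described above often follows an incremental, high-watermark pattern: pull only rows changed since the last sync, load them, and advance the watermark. The sketch below illustrates that pattern over in-memory rows as a stand-in; in a real pipeline the filter would be pushed into a PostgreSQL query and the load would be a BigQuery load job. All field and function names here are illustrative, not Pivotal's actual schema.

```python
from datetime import datetime, timezone

def extract_incremental(rows, watermark):
    """Return only rows updated after the last sync watermark.

    In production this filter would live in the source query,
    e.g. SELECT ... WHERE updated_at > %(watermark)s.
    """
    return [r for r in rows if r["updated_at"] > watermark]

def load(batch, destination):
    """Append a batch to the destination (a plain list standing in
    for a warehouse load job) and return the new watermark."""
    destination.extend(batch)
    return max(r["updated_at"] for r in batch) if batch else None

# Hypothetical product-database rows.
source = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]
warehouse = []
watermark = datetime(2024, 1, 2, tzinfo=timezone.utc)

batch = extract_incremental(source, watermark)
new_watermark = load(batch, warehouse)
print(len(warehouse), new_watermark.date())  # only row 2 is newer than the watermark
```

The key design choice is that the watermark is derived from loaded data, so a failed run simply re-extracts the same rows on retry rather than silently skipping them.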
Who You Are
Strong SQL skills with hands-on experience in modern cloud data warehouses: BigQuery, Snowflake, or Redshift
Proficient with dbt for managing SQL transformations. You understand how to write clean, maintainable, well-documented models
Comfortable with Python at a working level, enough to build and automate data workflows without needing to be a full software engineer
Experience with at least one BI or reporting tool (Tableau, Power BI, Metabase, or similar)
You think in business outcomes: your resume reflects the impact your work had, not just the tools you used
Self-directed and comfortable with ambiguity: you can identify what needs to be done and execute without heavy guidance
Collaborative by nature: you know how to work across teams with different levels of technical depth
Interview Prep Guide
Preparation Strategy
To prepare for this role, review core data engineering concepts, practice SQL and data modeling, and line up concrete examples of pipelines you've built or owned. Be ready to discuss how you solve problems and explain technical decisions to non-technical stakeholders. Finally, study the stack named in the posting (PostgreSQL, BigQuery, dbt) and be prepared to discuss how you'd contribute to the team's goals.
Likely Interview Rounds
- 1. Technical (~60 min)
What to prep: Review data engineering concepts, practice SQL and data modeling, and be prepared to discuss ETL pipelines and data warehouse optimization
- How do you handle data inconsistencies in a pipeline?
- Can you describe your experience with ETL processes?
- How do you optimize data warehouse performance?
- What is your approach to data modeling and database design?
- 2. Behavioral (~60 min)
What to prep: Prepare examples of your experience working with data engineering teams, and be ready to discuss your problem-solving skills and ability to communicate technical concepts to non-technical stakeholders
- Can you describe a time when you had to work with a cross-functional team to resolve a data issue?
- How do you handle conflicting priorities and tight deadlines in a data engineering role?
- Tell me about a project where you had to design and implement a data pipeline from scratch
- How do you stay current with new technologies and trends in data engineering?
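For the technical-round question on handling data inconsistencies, one commonly expected answer is deduplication that keeps the most recent version of each record, the same logic a dbt model expresses with `row_number() over (partition by ...)`. A minimal Python sketch with made-up fields:

```python
def dedupe_latest(rows, key="id", version="updated_at"):
    """Keep only the most recent row per key — a common fix for
    duplicate or conflicting records landing in a raw table."""
    latest = {}
    for row in rows:
        current = latest.get(row[key])
        if current is None or row[version] > current[version]:
            latest[row[key]] = row
    return list(latest.values())

raw = [
    {"id": 1, "status": "pending",  "updated_at": 1},
    {"id": 1, "status": "disputed", "updated_at": 2},  # later version wins
    {"id": 2, "status": "paid",     "updated_at": 1},
]
clean = dedupe_latest(raw)
print(sorted(r["status"] for r in clean))  # ['disputed', 'paid']
```

In an interview, it helps to also name where such duplicates come from (retried loads, late-arriving updates) and why the fix belongs in the transformation layer rather than the extractor.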
Most Likely Questions
- What is your experience with BigQuery and PostgreSQL?
- How do you handle data security and access control in a data warehouse?
- Can you describe your experience with dbt and data transformation?
- How do you approach data quality and data validation in a pipeline?
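For the data quality and validation question, dbt's convention is to declare generic tests such as `unique` and `not_null` in a model's YAML. The same two checks can be sketched in plain Python; the `claims` rows below are invented purely for illustration.

```python
def check_not_null(rows, column):
    """Return rows where the column is missing or None."""
    return [r for r in rows if r.get(column) is None]

def check_unique(rows, column):
    """Return values of the column that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        value = r.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

# Hypothetical claims table with two quality problems planted.
claims = [
    {"claim_id": "A", "amount": 120.0},
    {"claim_id": "B", "amount": None},   # fails the not_null check
    {"claim_id": "A", "amount": 90.0},   # fails the unique check
]
null_failures = check_not_null(claims, "amount")
dupe_failures = check_unique(claims, "claim_id")
print(len(null_failures), dupe_failures)  # 1 ['A']
```

A strong answer pairs checks like these with a policy: which failures halt the pipeline versus merely alert, and where failing rows are quarantined for review.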
Common Pitfalls
- Lack of experience with cloud-based data warehouses
- Inadequate understanding of data security and access control
- Insufficient knowledge of data modeling and database design principles
Free Prep Resources
- LeetCode
- System Design Primer (GitHub: donnemartin)
- dbt documentation
- BigQuery documentation