Senior Data Engineer

About RENEWCAST

Founded in 2020, RENEWCAST builds precision wind & solar forecasting software using cutting-edge ML to deliver accurate, scalable, and cost-efficient predictions. 
We value clarity, alignment, and smooth collaboration across technical and business teams — everyone understands not just what we build, but why.
Seniors are hands-on contributors who help raise engineering standards across the company.

Your Role

Own and evolve the data backbone that powers our forecasts. Youʼll join a growing engineering team where seniors help shape standards and mentor others while remaining hands-on. You will design, orchestrate, and optimize reliable pipelines with strong emphasis on performance, cost awareness, observability, and reproducibility. Youʼll work cross-functionally with Data Science, Meteorology, MLOps, Client Data Ingestion, and DevOps, and as the team scales you may grow into a discipline-lead role.

What You Will Do

  • Own end-to-end data workflows: from raw weather & production data through curated, production-ready features;

  • Build and orchestrate pipelines using modern systems (Airflow v3 preferred, Dagster, Prefect); 

  • Develop scalable batch ELT across multiple data sources and schedules;

  • Support real-time ingestion when required; 

  • Implement and maintain tensor-first data pipelines (xarray, Zarr, Dask) alongside classic ETL;

  • Improve observability: metrics, logs, p50/p95 latency tracking; cost/latency trade-off tuning;

  • Build resilience: restartable/idempotent workflows, safe backfills, schema evolution tracking, lineage;

  • Mentor junior engineers and contribute to developing team-wide best practices as the team grows;

  • Work cross-functionally to align data features, architecture, and SLAs.
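To make "restartable/idempotent workflows" concrete, here is a minimal sketch of one such pipeline step: it skips partitions whose output already exists (so backfills and reruns are safe) and writes atomically so a crash mid-write never leaves partial output behind. All names and the JSON-file layout are illustrative, not part of RENEWCAST's actual stack.

```python
import json
from pathlib import Path

def run_partition(out_dir: Path, partition: str, compute) -> dict:
    """Idempotent, restartable pipeline step (illustrative sketch).

    - Skips work if this partition's output already exists,
      so reruns and backfills are safe.
    - Writes to a temp file and renames it into place, so a crash
      mid-write never leaves a partial output visible.
    """
    out = out_dir / f"{partition}.json"
    if out.exists():                      # already done: safe to re-run
        return json.loads(out.read_text())
    result = compute(partition)           # the actual (expensive) work
    tmp = out.with_name(out.name + ".tmp")
    tmp.write_text(json.dumps(result))
    tmp.rename(out)                       # atomic rename on POSIX
    return result
```

Running the same partition twice performs the computation only once; the second call just reads the existing output.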


Must-Have Skills
  • 5-7+ years in Data Engineering (senior-level ownership);

  • Strong orchestration experience — ideally asset/event-driven (Airflow v3, Dagster, Prefect); 

  • Cloud data engineering (Azure/AWS/GCP), CI/CD, containerization, version/config management for complex workflows;

  • Proven track record building & optimizing scalable batch ELT/ETL workflows with distributed processing;

  • Observability and performance experience (metrics, alerts, cost tuning);

  • Resilient pipeline design: fault tolerance, reruns, schema evolution, lineage tools.
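As a small illustration of the observability work mentioned above, p50/p95 latency tracking reduces to computing percentiles over recorded task durations. A stdlib-only sketch (function name and structure are our own, not a prescribed tool):

```python
import statistics

def latency_percentiles(durations_ms: list[float]) -> dict:
    """p50/p95 over a batch of task durations, in milliseconds.

    Uses statistics.quantiles with n=100, which returns 99 cut
    points; index 49 is the 50th percentile, index 94 the 95th.
    """
    qs = statistics.quantiles(durations_ms, n=100)
    return {"p50": qs[49], "p95": qs[94]}
```

In practice these numbers would feed a metrics backend and alert thresholds; the point is that p95 exposes tail latency that a mean would hide.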



Bonus Points
  • Tensorial stack experience: xarray, Zarr, Dask; scientific formats (NetCDF, GRIB);

  • Kubernetes/Terraform familiarity;

  • Data governance/metadata tooling;

  • Kafka/Pulsar or other streaming systems;

  • Familiarity with modern product-oriented delivery methods (e.g., ShapeUp);

  • Familiarity with Databricks. 


How we work

We prize autonomy, ownership, and crisp communication. Priorities and decisions are transparent so teams move in sync, not in silos.

 
What Success Looks Like

Clear, reliable data flows that are easy to understand and iterate on. Faster, more cost-efficient pipelines with strong monitoring/alerting. A resilient orchestration layer and reproducible environments.
Stronger engineering capability in the team thanks to your mentorship and contributions to shared standards.

 
What We Offer
  • High-impact scope with visibility to leadership;

  • Competitive compensation;

  • Flexible hybrid work; offices in Tallinn and Rome;

  • Fast growth into a discipline lead role as the team scales. 

 
How to Apply

Send your resume directly to recruiting@renewcast.com
