Are you passionate about handling Big Data? Do you want to work on fetching, processing, and storing large amounts of data using state-of-the-art technologies? Do you have a strong programming background and a data-driven mindset? Palowise is looking for a candidate like you!
Your responsibilities:
Work on and enhance our Big Data infrastructure and services
Join a team of experts from various fields to provide better end-user solutions
Take full responsibility for the data processing pipeline, including importing, cleaning, storing, and processing large amounts of data
Share knowledge and act as a key bridge between the various teams at Palowise
Follow best practices and keep up with emerging technologies/industry trends
Actively contribute to taking Data Engineering at Palowise to the next level
Skills & Requirements:
Requirements:
The following are essential:
B.Sc. or higher in Computer Science/Engineering
Great programming skills and deep knowledge of software development principles
Production experience with Cloud infrastructure, preferably with Azure
Extensive experience with ELK stack (Elasticsearch 8 & Kibana)
Deep knowledge of Python
Deep knowledge of Relational Databases and/or NoSQL systems
Extensive experience working with Linux
Team spirit and willingness to learn from as well as mentor others
Desired Skills:
Knowledge of the following will be an advantage:
Experience with infrastructure as code (IaC) tools, preferably Terraform
Experience with Amazon Web Services (AWS)
Experience with Node.js (AngularJS or TypeScript)
Experience with any BI reporting tool (e.g. Power BI)
Experience with orchestration and/or ETL frameworks (e.g. Airflow)
Basic understanding of Machine Learning & Statistical Modelling
We offer:
- Opportunity to work with large amounts of data using state-of-the-art systems
- Competitive salary
- Performance-based bonus
- Hybrid working model, with offices in the Athenian Trigono
- Training budget