Data Engineer
S/1,000-1,500/month
Indeed
Full-time
Onsite
No experience requirement
No degree requirement
Description

**Who are we?**

Kashio is a fintech company that builds financial technology to help businesses adopt digital payments and automate their accounts receivable and accounts payable operations. We serve over 400 companies in sectors such as education, finance, property management, and utilities, and aim to expand into five markets within the next two years. We stand out by offering innovative, secure, and intuitive solutions, driven by a passionate team advancing financial inclusion and operational efficiency.

**Who are we looking for?**

We are seeking a **Data Engineer** with experience in the fintech ecosystem and in building data pipelines. This role will be key to leading our data strategy, ensuring quality and efficiency in our data ingestion and transformation processes.

**What will you do?**

**Technical Leadership and Data Strategy:**

* Define, design, and maintain robust, scalable data pipelines for ingestion, transformation, and loading (ETL/ELT) using best practices and modern patterns.
* Establish coding standards, modular design, and best practices for data process development.
* Research, evaluate, and implement new tools and technologies (on-premise or cloud) to improve the efficiency, reliability, and traceability of data flows.
* Lead pipeline integration into CI/CD workflows and promote a "DataOps" culture within the team.

**Development and Maintenance of Data Solutions:**

* Design, implement, and maintain data flows using tools such as Apache Airflow, dbt, Spark, and cloud services (Dataflow, Glue, Synapse).
* Create reusable, metadata-configurable ETL/ELT scripts and processes.
* Optimize queries and data structures in data warehouses such as BigQuery, Redshift, or Azure Synapse.
* Ensure data quality through automated validations, continuous testing, and proactive alerts.
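For candidates unfamiliar with the term, the "metadata-configurable ETL/ELT" bullet above can be sketched in plain Python (a minimal, library-free illustration; the config keys and transform names are invented for the example, not Kashio's actual stack):

```python
# Hypothetical sketch: a pipeline step described as data (a config dict),
# so the same generic runner serves many pipelines.

TRANSFORMS = {
    "strip": lambda v: v.strip(),
    "upper": lambda v: v.upper(),
    "to_float": lambda v: float(v),
}

def run_step(rows, config):
    """Apply the transforms named in `config` to each row's fields."""
    out = []
    for row in rows:
        new_row = dict(row)
        for field, transform_names in config["transforms"].items():
            for name in transform_names:
                new_row[field] = TRANSFORMS[name](new_row[field])
        out.append(new_row)
    return out

config = {"transforms": {"currency": ["strip", "upper"],
                         "amount": ["strip", "to_float"]}}
rows = [{"currency": " pen ", "amount": " 5500.00 "}]
print(run_step(rows, config))
# [{'currency': 'PEN', 'amount': 5500.0}]
```

Adding a new pipeline then means adding a config entry, not new code — the reuse this role's bullets describe.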
**Mentorship and Team Development:**

* Mentor other engineers (data, QA, backend) on data design patterns, efficient transformation, and tools such as dbt and Airflow.
* Promote best practices in data engineering within the development team.
* Facilitate internal training sessions on cloud tools, modern ETL practices, and monitoring tools.

**Collaboration and Process:**

* Collaborate closely with development, DevOps, and product teams to understand data needs from the early stages of development.
* Actively participate in agile ceremonies (planning, dailies, retros) as a key technical member.
* Ensure data traceability from source to consumption layer (lineage).

**Data Metrics and Quality:**

* Define and monitor key metrics such as pipeline latency, processed volume, data quality, failure rate, and SLA/SLO compliance.
* Set up observability dashboards for critical data flows.
* Report errors, anomalies, and bottlenecks clearly and with prioritization.

**Job Requirements:**

**Required Education:**

* Bachelor's degree in Systems Engineering, Computer Science, or a related field. A master's degree in Data Engineering, Big Data, Data Science, Analytics, or Artificial Intelligence is desirable.

**Work Experience:**

* Minimum of 5 years of experience in Data Engineering.
* Minimum of 3 years designing and operating production data pipelines.
* Experience working in agile, product-oriented environments.
* Solid knowledge of data modeling, partitioning, and governance.

**Mandatory Technical Skills:**

* Advanced programming in Python, SQL, and Spark.
* Data orchestration frameworks (Airflow, Prefect).
* Distributed processing (Apache Spark, PySpark).
* Experience with relational and NoSQL databases (PostgreSQL, MongoDB, Redshift).

**Required Competencies:**

* Effective communication skills.
* Leadership and mentorship ability.
* Strong analytical and problem-solving skills.
* Proactivity and autonomy.

**Working Conditions:**

* **Contract:** Fixed-term with possibility of renewal.
* **Modality:** Remote.
* **Schedule:** Monday to Friday, 9 am to 6 pm (Peru time).
* **Benefits:** Opportunity to lead the data strategy of a growing fintech; a dynamic, collaborative, and innovative work environment; professional development and continuous learning opportunities; 100% covered EPS health insurance; possibility to participate in company stock options.

Job type: Full-time

Salary: S/ 5,500.00 - S/ 7,000.00 per month

Application question(s):

* The working hours are Monday to Friday from 09:00 am to 06:00 pm (Peruvian time), and the salary range is USD 1,000 to 1,500. Are you agreeable?
* Can you describe a project where you designed and maintained data pipelines? What were the main challenges you faced, and how did you overcome them?
* How do you ensure data quality during ingestion and transformation processes? What tools or methodologies have you used to implement validations and testing?
* What experience do you have implementing data orchestration tools such as Apache Airflow or Prefect? How have you used them to improve workflow efficiency?
* In your opinion, what are the best practices in data engineering, and how have you applied them in your previous projects?

Work location: Remote
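As a rough illustration of the "automated validations" and failure-rate metric named in the Data Metrics and Quality duties, here is a minimal, library-free Python sketch (rule names and thresholds are invented for the example):

```python
# Hypothetical sketch: row-level data-quality rules plus a failure-rate metric,
# the kind of check a pipeline could run after each ingestion batch.

def check_rows(rows, rules):
    """Run every named rule over every row.

    Returns (failures, failure_rate), where failures is a list of
    (row_index, rule_name) pairs and failure_rate is the share of rows
    that failed at least one rule.
    """
    failures = []
    for i, row in enumerate(rows):
        for name, rule in rules.items():
            if not rule(row):
                failures.append((i, name))
    failed_rows = {i for i, _ in failures}
    rate = len(failed_rows) / len(rows) if rows else 0.0
    return failures, rate

rules = {
    "amount_not_null": lambda r: r.get("amount") is not None,
    "amount_positive": lambda r: r.get("amount") is not None and r["amount"] > 0,
}

rows = [{"amount": 100.0}, {"amount": -5.0}, {"amount": None}]
failures, rate = check_rows(rows, rules)
print(failures)  # [(1, 'amount_positive'), (2, 'amount_not_null'), (2, 'amount_positive')]
print(rate)      # 2 of 3 rows failed at least one rule
```

In practice the failure rate would feed an alert or dashboard; frameworks such as dbt tests or Great Expectations formalize the same idea.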

Source: Indeed
María García
Indeed · HR

© 2025 Servanan International Pte. Ltd.