DATA ENGINEERING
S/4,500-6,000/month
Indeed
Full-time
Remote
No experience requirement
No degree requirement
Lambayeque 284, Lima 15093, Peru
Description

Job Summary: We are seeking a Data Engineer with experience in agile methodologies, large-scale data analysis, ELT pipeline development, and in-depth knowledge of the Azure and AWS ecosystems.

Key Highlights:
1. Experience with agile methodologies and multidisciplinary teams.
2. Expertise in the Azure and AWS data ecosystems.
3. Development of ELT/ETL data pipelines and infrastructure-as-code management.

REQUIREMENTS:
✓ Knowledge of and experience working with agile methodologies such as Scrum alongside multidisciplinary teams.
✓ Experience with business intelligence and data visualization tools such as Power BI, and with programming languages for analyzing and synthesizing complex information, such as R and Python.
✓ Handling large volumes of data from diverse sources: structured (relational databases such as MySQL, PostgreSQL, SQL Server, Oracle, Informix, Teradata), semi-structured (JSON and XML files obtained via APIs), and NoSQL databases such as MongoDB.
✓ Expertise in developing ELT/ETL data pipelines for diverse structured and semi-structured sources, using programming languages such as Python/PySpark, PL/SQL, and T-SQL, for loading into datamarts, DWHs, data lakes, and lakehouses.
✓ Expertise in the Azure ecosystem, specializing in Azure Data Factory for pipeline orchestration, Azure Databricks for implementing PySpark notebooks for data transformation, Azure Event Hubs for real-time messaging, Azure Stream Analytics and Azure Data Explorer for real-time analytics, Azure DevOps for version control and CI/CD, Azure Synapse Analytics for large-scale structured data analysis, Azure SQL for structured data, and Azure Data Lake Store for raw data storage.
✓ Expertise in the AWS ecosystem, specializing in AWS Glue for ETL/ELT pipeline creation and orchestration, AWS CodeCommit for version control, Amazon Athena for querying data on Amazon S3, Amazon S3 for raw data storage, Amazon Redshift for DWH construction, Amazon DynamoDB for non-relational data storage, AWS Step Functions for orchestration, and Amazon CloudWatch for monitoring and logging.
✓ Strong experience implementing CI/CD pipelines to automate delivery and deployment to production environments, using version control services such as GitLab.
✓ Expertise in infrastructure-as-code (IaC) management, using Terraform and AWS CloudFormation to provision and maintain cloud environments efficiently and scalably.
✓ Knowledge of and experience with orchestration tools such as Apache Airflow, and with API development in Python using the FastAPI framework.

ANALYTICAL PROGRAMS AND TOOLS
✓ Advanced Microsoft Office: Word, PowerPoint, Excel (VBA macros)
✓ Statistical software: RStudio, SPSS
✓ BI software: Microsoft Power BI
✓ Programming languages: SQL, R, Python, Java
✓ Database management systems: MySQL, PostgreSQL, Oracle, Informix, SQL Server, Teradata
✓ NoSQL databases: MongoDB
✓ Big data technologies: Impala, Hadoop, Apache Spark, BigQuery, Databricks, Apache Kafka
✓ Version control: Git, GitLab, Bitbucket
✓ Project planning software: Jira
✓ Orchestration software: Azure Data Factory (ADF), Apache Airflow, AWS Glue
✓ Operating systems: Windows Server, Linux

SPECIALIZATION: Computer Engineer, Systems Engineer, or Software Engineer.

MINIMUM REQUIRED EXPERIENCE: 5+ years of solid, demonstrable experience in Oracle/AWS integrations, within data projects in the banking, retail, or insurance sectors.

Employment Type: Temporary
Salary: S/ 4,500.00 - S/ 6,000.00 per month
Work Location: Remote
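For candidates unfamiliar with the pattern, the ELT/ETL work described above (extracting semi-structured JSON from an API, flattening it, and loading it into a warehouse table) can be sketched in a few lines of plain Python. This is an illustrative sketch only, not part of the posting: the record shape, the `sales` table name, and the use of SQLite as a stand-in for a DWH such as Redshift or Synapse are all assumptions.

```python
import json
import sqlite3

# Hypothetical semi-structured payload, as might be returned by an API.
RAW_JSON = """
[
  {"id": 1, "customer": {"name": "Ana", "country": "PE"}, "amount": "150.50"},
  {"id": 2, "customer": {"name": "Luis", "country": "PE"}, "amount": "99.90"}
]
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw JSON payload into Python records."""
    return json.loads(raw)

def transform(records: list[dict]) -> list[tuple]:
    """Transform: flatten the nested customer object and cast amounts to float."""
    return [
        (r["id"], r["customer"]["name"], r["customer"]["country"], float(r["amount"]))
        for r in records
    ]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write the flattened rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, name TEXT, country TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_JSON)), conn)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

In a production pipeline the same three steps would typically run as PySpark jobs orchestrated by Azure Data Factory, AWS Glue, or Apache Airflow, with the target being Redshift, Synapse, or a lakehouse rather than SQLite.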

Source: Indeed
María García
Indeed · HR

© 2025 Servanan International Pte. Ltd.