




**Job Summary:** Design, build, and operate scalable and resilient cloud-based data platforms for advanced analytics, BI, and AI, ensuring data quality, governance, and efficiency.

**Key Highlights:**
1. Design and development of robust and scalable data pipelines
2. Implementation of event-driven architectures and ETL/ELT optimization
3. Collaboration with BI, Data Science, and business teams

**Role Purpose:** Design, build, and operate scalable and resilient data platforms in cloud environments (primarily Azure), enabling reliable availability of both batch and real-time data for advanced analytics, business intelligence (BI), and artificial intelligence models. This role is critical to transforming data into strategic assets, ensuring quality, governance, and efficiency across the entire data lifecycle.

**Key Responsibilities:**
* Design and develop robust and scalable data pipelines (batch and streaming).
* Implement event-driven architectures for real-time ingestion.
* Build and optimize high-performance ETL/ELT processes.
* Manage storage solutions in Data Lake and Data Warehouse environments.
* Optimize cloud resource usage, focusing on cost efficiency and performance.
* Implement data quality controls, governance, security, and lineage.
* Integrate multiple data sources: APIs, databases, legacy systems, events, and IoT.
* Automate deployments using CI/CD practices (DataOps).
* Monitor production pipelines, manage incidents, and ensure high availability.
* Collaborate with BI, Data Science, and business teams to enable analytical use cases.

**Technology Stack:**

Cloud & Platform
* Microsoft Azure (Data Factory, Synapse, Data Lake, Event Hub, etc.)
Processing & Data
* Databricks
* Apache Spark (PySpark / Scala)
* Apache Kafka (desirable)

Languages
* Python (advanced)
* SQL (advanced)

DevOps & Automation
* CI/CD pipelines (Azure DevOps, GitHub Actions, or similar)

**Core Technical Knowledge:**
* Data modeling (OLTP, OLAP, dimensional models)
* Modern data architectures (Lakehouse, Data Mesh – desirable)
* Large-scale data processing (Big Data)
* Partitioning and query optimization techniques
* Handling formats: Parquet, Avro, JSON, ORC
* Data security and governance (RBAC, policies, encryption)
* Streaming vs. batch data handling

**Key Competencies:**
* Analytical thinking and results orientation
* Architectural design capability
* Engineering best practices (clean code, testing, versioning)
* Focus on efficiency and cost optimization
* Collaborative work with multidisciplinary teams

**Nice to Have (Differentiators):**
* Experience with large-scale real-time architectures
* Knowledge of DataOps / MLOps
* Experience in advanced analytics or AI projects
* Azure certifications (e.g., Data Engineer Associate)

**Salary:** Starting from S/.1.00 per month

**Application Question(s):**
* What is your expected salary?

**Work Location:** On-site


