196 Data Engineer Job Listings in Argentina
Data Engineer - Data Pipelines & Modeling
Today
Job Description
This position is only for professionals based in Argentina or Uruguay
We're looking for a data engineer for one of our clients' teams. You will help enhance and scale the data transformation and modeling layer. This role will focus on building robust, maintainable pipelines using dbt, Snowflake, and Airflow to support analytics and downstream applications. You'll work closely with the data, analytics, and software engineering teams to create scalable data models, improve pipeline orchestration, and ensure trusted, high-quality data delivery.
Key Responsibilities:
- Design, implement, and optimize data pipelines that extract, transform, and load data into Snowflake from multiple sources using Airflow and AWS services
- Build modular, well-documented dbt models with strong test coverage to serve business reporting, lifecycle marketing, and experimentation use cases
- Partner with analytics and business stakeholders to define source-to-target transformations and implement them in dbt
- Maintain and improve our orchestration layer (Airflow/Astronomer) to ensure reliability, visibility, and efficient dependency management
- Collaborate on data model design best practices, including dimensional modeling, naming conventions, and versioning strategies
Core Skills & Experience:
- dbt: Hands-on experience developing dbt models at scale, including use of macros, snapshots, testing frameworks, and documentation. Familiarity with dbt Cloud or CLI workflows
- Snowflake: Strong SQL skills and understanding of Snowflake architecture, including query performance tuning, cost optimization, and use of semi-structured data
- Airflow: Solid experience managing Airflow DAGs, scheduling jobs, and implementing retry logic and failure handling (a minimal DAG sketch follows this list); familiarity with Astronomer is a plus
- Data Modeling: Proficient in dimensional modeling and building reusable data marts that support analytics and operational use cases
- AWS (Nice to Have): Familiarity with AWS services such as DMS, Kinesis, and Firehose for ingesting and transforming data
- Segment (Nice to Have): Familiarity with event data and related flows, piping data in and out of Segment
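Not part of the original posting, but as a hedged illustration of the retry-and-failure-handling pattern the Airflow bullet above describes, here is a minimal DAG sketch (assuming Airflow 2.4+; the DAG and task names are hypothetical):

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

# Retries and backoff applied to every task in the DAG; values are illustrative.
default_args = {
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "retry_exponential_backoff": True,
}

def extract_source_data(**context):
    """Placeholder extract step; a real task would pull from a source system."""
    print("extracting...")

with DAG(
    dag_id="example_snowflake_load",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_source_data",
        python_callable=extract_source_data,
    )
```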
Data Engineer
Today
Job Description
About us
At ExxonMobil, our vision is to lead in energy innovations that advance modern living and a net-zero future. As one of the world’s largest publicly traded energy and chemical companies, we are powered by a unique and diverse workforce fueled by the pride in what we do and what we stand for.
The success of our Upstream, Product Solutions and Low Carbon Solutions businesses is the result of the talent, curiosity and drive of our people. They bring solutions every day to optimize our strategy in energy, chemicals, lubricants and lower-emissions technologies.
We invite you to bring your ideas to ExxonMobil to help create sustainable solutions that improve quality of life and meet society's evolving needs. Learn more about our What and our Why and how we can work together.
Our Global Business Center (GBC) in Buenos Aires was established in 2004, and our diverse and multinational workforce of more than 2,500 colleagues has been powering ExxonMobil's global businesses ever since. We are proud to be part of an inclusive, safe, and vibrant work environment.
The IT department is seeking new candidates for its dynamic team. The department provides worldwide support to the organization all along the value chain, from exploration through site development to production.
- As a FinOps Engineer specializing in cloud cost optimization, you will be responsible for managing and reducing cloud expenses while ensuring efficient use of cloud resources. Your responsibilities will include designing, maintaining, and optimizing ETL/ELT processes to facilitate data-driven decisions and maintain a cost-effective cloud infrastructure.
- Our office is in Puerto Madero, Buenos Aires and you will interact with cloud engineers, developers, and users from all around the world.
Responsibilities:
- Monitor and analyze cloud usage and spending to identify cost-saving opportunities.
- Ensure that cloud infrastructure is optimized for both performance and cost.
- Identify unused or underused resources and recommend more efficient usage.
- Implement and maintain tools and processes to track cloud spending and usage.
- Work closely with development, finance, and procurement teams to develop budgets and purchase plans.
- Create clear and concise reports on cost optimization progress for management.
- Develop and govern cost optimization policies and best practices.
- Train team members on best practices for cost-efficient cloud usage.
- Stay updated on industry trends and continuously improve cost optimization strategies.
Qualifications:
- Bachelor's degree in Software Engineering, Systems Engineering, Computer Engineering, Computer Science, Electronic Engineering, Information Analysis, or similar.
- Advanced English skills, both verbal and written.
- Experience with scripting/programming languages and tools such as PowerShell, Python, SQL, or Azure Data Factory to automate cost management tasks.
- Commitment to Continuous Learning: demonstrate a proactive attitude towards continuous professional development and learning.
- Self-starter and self-motivated
- Interpersonal and Negotiation skills - ability to effectively interact with staff, peers, and management
- Planning and Coordination skills - ability to effectively multi-task and progress many work items in parallel
- Strong technical writing and communication skills (document and communicate design concepts and conduct trainings)
- Proficiency in AWS and/or Azure, including their cost management tools (e.g., AWS Cost Explorer, Azure Cost Management); a small sketch follows this list.
- Strong skills in data analysis and visualization tools and concepts (e.g., Excel, Power BI, Tableau) to interpret cloud usage and cost data.
- Knowledge of cloud cost-saving strategies such as savings plans, reserved instances, spot instances, and rightsizing resources.
- Understanding of cloud architecture principles to design cost-efficient solutions.
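As a rough sketch of the cost-reporting automation this role describes (not from the posting itself), the snippet below pulls spend per AWS service with boto3's Cost Explorer client; the dates are placeholders and AWS credentials are assumed to be configured:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer client

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print spend per AWS service for the period, highest first.
groups = response["ResultsByTime"][0]["Groups"]
for group in sorted(groups, key=lambda g: -float(g["Metrics"]["UnblendedCost"]["Amount"])):
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{group['Keys'][0]}: ${amount:,.2f}")
```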
An ExxonMobil career is one designed to last. Our commitment to you runs deep: our employees grow personally and professionally, with benefits built on our core categories of health, security, finance, and life.
We offer you:
- Competitive health coverage
- 3 weeks of vacation for up to 5 years of service, plus 1 personal day
- Online training tools
- Gym discounts and activities for sport and general well-being
- A solid ergonomic program
- Medical assistance available in the offices
- Equipped maternity rooms
- Among others
More information on our Company's benefits can be found here.
Please note benefits may be changed from time to time without notice, subject to applicable law.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, sexual orientation, gender identity, national origin, citizenship status, protected veteran status, genetic information, or physical or mental disability.
Nothing herein is intended to override the corporate separateness of local entities. Working relationships discussed herein do not necessarily represent a reporting connection, but may reflect a functional guidance, stewardship, or service relationship.
Exxon Mobil Corporation has numerous affiliates, many with names that include ExxonMobil, Exxon, Esso and Mobil. For convenience and simplicity, those terms and terms like corporation, company, our, we and its are sometimes used as abbreviated references to specific affiliates or affiliate groups. Abbreviated references describing global or regional operational organizations and global or regional business lines are also sometimes used for convenience and simplicity. Similarly, ExxonMobil has business relationships with thousands of customers, suppliers, governments, and others. For convenience and simplicity, words like venture, joint venture, partnership, co-venturer, and partner are used to indicate business relationships involving common activities and interests, and those words may not indicate precise legal relationships.
Data Engineer
Today
Job Description
We're hiring a highly motivated Data Engineer with expertise in Python, AWS Glue/PySpark, AWS Lambda, Kafka, and relational databases (specifically Postgres). You'll be responsible for designing, developing, and maintaining data processing and management solutions, ensuring data integrity and availability through collaboration with multidisciplinary teams.
Responsibilities:
- Design and develop efficient data pipelines using Python, AWS Glue/PySpark, AWS Lambda, Kafka, and related technologies.
- Implement and optimize ETL processes for data extraction, transformation, and loading into relational databases, especially Postgres (a PySpark sketch follows this list).
- Collaborate on data warehouse architectures, ensuring proper data modeling, storage, and access.
- Utilize tools like StitchData and Apache Hudi for data integration and incremental management, improving efficiency and enabling complex operations.
- Identify and resolve data quality and consistency issues, implementing monitoring processes across pipelines and storage systems.
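To make the responsibilities above concrete, here is a minimal PySpark sketch of an extract-transform-load into Postgres; it is not from the posting, the paths, table, and connection details are hypothetical, and the Postgres JDBC driver is assumed to be on the Spark classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw JSON landed in S3 (hypothetical bucket and layout).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: deduplicate, derive a date column, drop bad rows.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: append into Postgres via JDBC.
(cleaned.write.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/analytics")
    .option("dbtable", "public.orders_clean")
    .option("user", "etl_user")
    .option("password", "***")
    .mode("append")
    .save())
```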
Requirements:
- Strong Python skills and proficiency in associated libraries for data processing and manipulation.
- Expertise in AWS Glue/PySpark, AWS Lambda, and Kafka for ETL workflow development and streaming architecture.
- Experience in designing and implementing relational databases, specifically Postgres.
- Practical knowledge of data pipeline development, ETL processes, and data warehouses.
- Familiarity with data integration tools like StitchData and Apache Hudi for efficient incremental data management.
- Advanced level of English for effective communication and collaboration.
Benefits:
- Competitive salary.
- Annual offsite team trip.
- Learn from a very high performing team.
- Training from US mentors.
If you're a passionate Data Engineer with experience in these technologies and seek a stimulating and challenging environment, join our team and contribute to the success of our advanced data processing and management solutions!
Data Engineer
Today
Job Description
Remote, Latam, Full Time, Individual Contributor, 2+ years of experience
Who We Are
At Yuno, we are building the payment infrastructure that enables all companies to participate in the global market. Founded by a team of seasoned experts in the payments and IT industries, Yuno provides a high-performance payment orchestrator. Our technology offers companies access to leading payment capabilities, allowing them to engage customers confidently and maintain global business operations with seamless payment integrations worldwide.
Shape your future with Yuno!
We are orchestrating the best high-performing team!
If you're a Data Engineer specialized in ETLs who enjoys solving complex problems with code and isn't afraid of learning new things, we are looking for you.
As a Senior Data Engineer, you will be part of the team delivering the different parts of our production-ready product, while co-designing and implementing an architecture that can scale up with the product and the company.
Your challenge at Yuno
• Implement any type of extraction (manual uploads, emails, SFTP, and APIs); a small extraction sketch follows this list
• Build solutions for application integrations, task automation and any relevant data automation using proven design patterns.
• Design and build data processing pipelines for large volumes of data that are performant and scalable.
• Build and maintain the infrastructure required for extraction, loading, transformation, and storage of data from multiple data sources using custom scripts.
• Collaborate with the team to develop and maintain a robust and scalable data infrastructure to support our data needs.
• Implement and enforce data governance policies and best practices to ensure data quality, security, and compliance.
• Manage and optimize data warehousing solutions for efficient storage and retrieval of data.
• Develop and maintain data lake solutions for storing and managing diverse data types.
• Use big data technologies and frameworks to process and analyze large datasets efficiently.
• Work with distributed data systems and technologies to handle high volumes of data.
• Integrate data from multiple sources.
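As a hedged illustration of the API extraction work listed above (the endpoint, field names, and pagination scheme are all hypothetical), a minimal paginated pull might look like:

```python
import requests

def extract_pages(base_url: str, token: str):
    """Yield records from a paginated JSON API (hypothetical endpoint)."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    url = f"{base_url}/transactions?page=1"
    while url:
        response = session.get(url, timeout=30)
        response.raise_for_status()          # fail loudly on HTTP errors
        payload = response.json()
        yield from payload["data"]
        url = payload.get("next_page")       # None once the last page is reached

records = list(extract_pages("https://api.example.com", token="..."))
```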
Skills you need
Minimum Qualifications
• Proven experience as a Data Engineer or similar role in a data-intensive environment.
• Strong proficiency in Python and SQL.
• Knowledge of data infrastructure design and management.
• Familiarity with data governance principles and practices.
• Experience with ETL processes and tools.
• Proficiency in working with Data Warehouses or Data Lakes.
• Familiarity with big data technologies and distributed data systems.
• Ability to integrate data from multiple sources.
• Knowledge of Spark is a plus.
• Verbal and written English fluency.
What we offer at Yuno
• Competitive Compensation
• Remote work - You can work from anywhere!
• Home Office Bonus - We offer a one-time allowance to help you create your ideal home office.
• Work equipment
• Stock options
• Health Plan wherever you are
• Flexible Days off
• Language, Professional and Personal growth courses
Data Engineer
Today
Job Description
This opportunity might interest you!
- Gains a thorough understanding of the requirements and ensures that the work product aligns with customer requirements.
- Works within the established development guidelines, standards, methodologies, and naming conventions.
- Builds processes to ingest, process, and store massive amounts of data.
- Assists with optimizing the performance of big data ecosystems.
- Wrangles data in support of data science projects.
- Performs productionization of ML and statistical models for Data Scientists & Statisticians.
- Constructs, tests, and maintains scalable data solutions for structured and unstructured data to support reporting, analytics, ML, and AI.
- Assists with research and building of proof of concepts to test out theories recommended by Senior and Lead Data Engineers.
- Collaborates and contributes in identifying project risks, designing mitigation plans, and developing estimates.
- Contributes to the design and development of data pipelines and feature engineering of data solutions.
- Established processes and methods can still be relied on; however, Data Engineers will be required to come up with creative solutions to problems. Senior teammates still provide oversight on solutions to complex problems.
Requirements:
- Experience: Minimum of 5 years of proven experience as a Data Engineer or in a similar role.
- Robust Technical Skills in:
- Microsoft Data Factory: Solid experience in designing and building data pipelines.
- Azure Data Lake: Deep understanding of storing and managing data at scale.
- Azure: Hands-on experience within the Microsoft Azure data services ecosystem.
- Databricks: Proficiency in using the Databricks platform for data processing and solution development (a brief sketch follows the requirements list).
- Python: Advanced proficiency in Python for data engineering, scripting, and application development.
- SQL: Advanced experience with SQL for manipulating, querying, and optimizing relational databases.
- Problem-Solving Mindset: Demonstrated ability to approach and solve complex challenges with a solution-oriented focus.
- Inquisitive Mindset & Continuous Learning: A constant desire to explore new technologies, methodologies, and enhance skills.
- Excellent Communication Skills: Ability to communicate clearly and effectively with technical team members and non-technical stakeholders.
- Ability to work both independently and as part of a collaborative team.
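As a small, hedged sketch of the Databricks/PySpark work these requirements describe (table names and paths are hypothetical; `spark` is assumed to be the session predefined in a Databricks notebook, where Delta Lake is built in):

```python
from pyspark.sql import functions as F

# Read a bronze Delta table (hypothetical mount point and layout).
events = spark.read.format("delta").load("/mnt/datalake/bronze/events")

# Aggregate to a daily count per event date.
daily = (
    events.groupBy(F.to_date("event_ts").alias("event_date"))
          .agg(F.count("*").alias("event_count"))
)

# Write the result as a silver Delta table (target schema assumed to exist).
(daily.write.format("delta")
      .mode("overwrite")
      .saveAsTable("silver.daily_event_counts"))
```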
If you believe you meet the requirements for the position and you enjoy challenges, don't hesitate... apply!
About Randstad
At Randstad we are driven to help people and organizations develop their full potential. That is the commitment we make as a company worldwide, a commitment that drives us to go further so that our clients and candidates achieve success. How do we do it? By combining our passion for people with the power of technology, creating more human experiences that allow us to be a source of inspiration and support for those who choose us. Because we are convinced that better people make better companies.
We strive every day to create a diverse environment, and we are proud to be an equal-opportunity employer for all people, regardless of race, color, religion, sex, sexual identity or orientation, country of origin, genetics, disability, or age.
Education: Completed university degree
Data Engineer
Today
Job Description
RYZ Labs is looking for a Data Engineer to help enhance and stabilize the data monitoring system of one of our biggest clients. This role will focus on building robust, scalable, and proactive alerting workflows to ensure high data quality and reliability across our pipelines. You'll work closely with our data engineering team to improve orchestration and monitoring across key tools like Snowflake, dbt, Airflow, and AWS services.
Key Responsibilities:
- Design and implement a resilient data monitoring framework that improves incident detection and reduces false positives.
- Enhance alerting mechanisms for pipeline failures, data anomalies, and data freshness across Snowflake and dbt models.
- Streamline orchestration and dependency management in Airflow to increase reliability and improve observability.
- Integrate AWS DMS, Kinesis, and Firehose monitoring into our centralized alerting system.
- Collaborate with stakeholders to define SLAs and implement meaningful metrics for data health.
- Document monitoring architecture and ensure knowledge transfer to internal teams.
Qualifications:
- Snowflake: Strong understanding of Snowflake architecture, performance tuning, and metadata management.
- dbt: Experience creating and maintaining dbt models with test coverage and clear documentation.
- Airflow: Expertise in managing DAGs, handling retries, and structuring tasks for reliability and visibility. Experience with Astronomer is a bonus.
- AWS: Deep familiarity with DMS (Database Migration Service), Kinesis, and Firehose, especially around monitoring and alerting integrations.
- Monitoring & Alerting: Experience with data monitoring tools or custom observability frameworks (a freshness-check sketch follows this list).
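Not from the posting, but to illustrate the freshness-alerting idea in the qualifications above, here is a minimal sketch using the Snowflake Python connector; the account, table, timestamp column, and one-hour SLA are all hypothetical:

```python
import snowflake.connector

FRESHNESS_SQL = """
    SELECT DATEDIFF('minute', MAX(loaded_at), CURRENT_TIMESTAMP()) AS lag_min
    FROM analytics.raw.orders  -- hypothetical table with a loaded_at column
"""

conn = snowflake.connector.connect(
    account="xy12345",          # placeholder account identifier
    user="monitor_svc",
    password="***",
    warehouse="MONITOR_WH",
    database="ANALYTICS",
)
try:
    (lag_minutes,) = conn.cursor().execute(FRESHNESS_SQL).fetchone()
    if lag_minutes > 60:  # hypothetical 1-hour freshness SLA
        print(f"ALERT: orders table is {lag_minutes} minutes stale")
finally:
    conn.close()
```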
About RYZ Labs:
RYZ Labs is a startup studio built in 2021 by two lifelong entrepreneurs. The founders of RYZ have worked at some of the world's largest tech companies and some of the most iconic consumer brands. They have lived and worked in Argentina for many years and have decades of experience in Latam. What brought them together is the passion for the early phases of company creation and the idea of attracting the brightest talents in order to build industry-defining companies in a post-pandemic world.
Our teams are remote and distributed throughout the US and Latam. They use the latest cutting-edge technologies in cloud computing to create applications that are scalable and resilient. We aim to provide diverse product solutions for different industries, planning to build a large number of startups in the upcoming years.
At RYZ, you will find yourself working with autonomy and efficiency, owning every step of your development. We provide an environment of opportunities, learning, growth, expansion and challenging projects. You will deepen your experience while sharing and learning from a team of great professionals and specialists.
Our values and what to expect:
- Customer First Mentality - every decision we make should be made through the lens of the customer.
- Bias for Action - urgency is critical, expect that the timeline to get something done is accelerated.
- Ownership - step up if you see an opportunity to help, even if not your core responsibility.
- Humility and Respect - be willing to learn, be vulnerable, and treat everyone that interacts with RYZ with respect.
- Frugality - being frugal and cost-conscious helps us do more with less.
- Deliver Impact - get things done in the most efficient way.
- Raise our Standards - always be looking to improve our processes, our team, our expectations. Status quo is not good enough and never should be.
Data Engineer
Today
Job Description
Solvd is an AI-first advisory and digital engineering firm delivering measurable business impact through strategic digital transformation. Taking an AI-first approach, we bridge the critical gap between experimentation and real ROI, weaving artificial intelligence into everything we do and helping clients at all stages accelerate AI integration into each process layer. Our mission is to empower passionate people to thrive in the era of AI while maintaining rigorous ethical AI standards. We’re supported by a global team with offices in the USA, Poland, Ukraine and Georgia.
We are looking for a Data Engineer to develop an AI-powered data mapping recommendation platform to speed up the integration and validation of complex datasets. The system will automate data extraction, mapping, and validation processes that currently require extensive manual effort due to inconsistencies in source data, reliance on domain-specific code mappings, and heuristic-based validation.
Responsibilities:
- Build and maintain scalable data pipelines with Databricks, Spark, and PySpark.
- Manage data governance, security, and credentials using Unity Catalog and Secret Scopes.
- Develop and deploy ML models with MLflow; work with LLMs and embedding-based vector search.
- Apply ML/DL techniques (classification, regression, clustering, transformers) and evaluate using industry metrics.
- Design data models and warehouses leveraging dbt, Delta Lake, and Medallion architecture.
- Work with healthcare data standards and medical terminology mapping.
- Databricks Expertise: Candidates must demonstrate strong hands-on experience with the Databricks platform, including:
- Unity Catalog: Managing data governance, access control, and auditing across workspaces.
- Secret Scopes: Secure handling of credentials and sensitive configurations.
- Apache Spark / PySpark: Writing performant, scalable distributed data pipelines.
- MLflow: Managing ML lifecycle including experiment tracking, model registry, and deployment (an experiment-tracking sketch follows this posting's tool list).
- Vector Search: Working with vector databases or search APIs to build embedding-based retrieval systems.
- LLMs (Large Language Models): Familiarity with using or fine-tuning LLMs in Databricks or similar environments.
- Data Engineering Skills: Experience designing and maintaining robust data pipelines, including:
- Data Modeling & Warehousing: Dimensional modeling, star/snowflake schemas, SCD (Slowly Changing Dimensions).
- Modern Data Stack: Familiarity with dbt, Delta Lake, and the Medallion architecture (Bronze, Silver, Gold layers).
- Machine Learning Knowledge (Nice to Have): A solid foundation in machine learning is valued, including:
- Traditional Machine Learning Techniques: Classification, regression, clustering, etc.
- Model Evaluation & Metrics: Precision, recall, F1-score, ROC-AUC, etc.
- Deep Learning (DL): Understanding of neural networks and relevant frameworks.
- Transformers & Attention Mechanisms: Knowledge of modern NLP architectures and their applications.
- Preferred Domain Knowledge (Nice to Have): Experience with healthcare data standards and medical code systems such as eCQM, VSAC, RxNorm, LOINC, SNOMED, etc.
- Understanding of medical terminology and how to map or normalize disparate coding systems.
- Platforms & Tools: Databricks, Unity Catalog, Secret Scopes, MLflow
- Languages & Frameworks: Python, PySpark, Apache Spark
- Machine Learning & AI: Traditional ML techniques, Deep Learning, Transformers, Attention Mechanisms, LLMs
- Search & Retrieval: Vector databases, embedding-based vector search
- Data Engineering & Modeling: dbt, Delta Lake, Medallion architecture (Bronze/Silver/Gold), Dimensional modeling, Star/Snowflake schemas
- Domain (Optional): Healthcare data standards (eCQM, VSAC, RxNorm, LOINC, SNOMED)
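As a hedged sketch of the MLflow experiment tracking named in this posting's tool list (toy data stands in for the client's datasets; run names and parameters are illustrative):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for the real mapping data.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="mapping_classifier_baseline"):
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    score = f1_score(y_test, model.predict(X_test))
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_metric("f1", score)              # one of the metrics listed above
    mlflow.sklearn.log_model(model, "model")    # store the artifact with the run
```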
Data Engineer
Today
Job Description
Join Our Data Products and Machine Learning Development Remote Startup!
Mutt Data is a dynamic startup committed to crafting innovative systems using cutting-edge Big Data and Machine Learning technologies.
We’re looking for a Data Engineer to help take our expertise to the next level. If you consider yourself a data nerd like us, we’d love to connect!
What We Do:
- Leveraging our expertise, we build modern Machine Learning systems for demand planning and budget forecasting.
- Developing scalable data infrastructures, we enhance high-level decision-making, tailored to each client.
- Offering comprehensive Data Engineering and custom AI solutions, we optimize cloud-based systems.
- Using Generative AI, we help e-commerce platforms and retailers create higher-quality ads, faster.
- Building deep learning models, we enhance visual recognition and automation for various industries, improving product categorization, quality control, and information retrieval.
- Developing recommendation models, we personalize user experiences in e-commerce, streaming, and digital platforms, driving engagement and conversions.
- Amazon Web Services
- Google Cloud
- Astronomer
- Databricks
- Kaszek
- Product Minds
- H2O.ai
- Soda
- We are Data Nerds
- We are Open Team Players
- We Take Ownership
- We Have a Positive Mindset
Curious about what we’re up to? Check out our case studies and dive into our blog post to learn more about our culture and the exciting projects we’re working on!
Responsibilities:
- Collaborate with the team to define goals and deliver custom data solutions.
- Innovate with new tools to improve Mutt Data's infrastructure and processes.
- Design and implement ETL processes, optimize queries, and automate pipelines.
- Own projects end-to-end—build, maintain, and improve data systems while working with clients.
- Develop tools for the team and assist in tech migrations and model design.
- Build scalable, high-performance data architectures.
- Focus on code quality—review, document, test, and integrate CI/CD.
Requirements:
- Experience in Data Engineering, including building and optimizing data pipelines.
- Strong knowledge of SQL and Python (Pandas, Numpy, Jupyter).
- Experience working with any cloud (AWS, GCP, Azure).
- Basic knowledge of Docker.
- Experience with orchestration tools like Airflow or Prefect.
- Familiarity with ETL processes and automation.
- Experience with stream processing tools like Kafka Streams, Kinesis, or Spark (a small consumer sketch follows this list).
- Solid command of English for understanding and communicating technical concepts (Design Documents, etc.).
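To ground the stream-processing requirement above, here is a minimal consumer sketch with the kafka-python package; the topic, brokers, and event schema are hypothetical:

```python
import json
from collections import Counter

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "page_views",                               # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Keep a simple running count per country and print a snapshot periodically.
counts = Counter()
for message in consumer:
    counts[message.value["country"]] += 1
    if sum(counts.values()) % 1000 == 0:
        print(dict(counts))
```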
Benefits:
- 20% of your salary in USD
- Remote-first culture – work from anywhere!
- Gympass or sports club stipend to stay active.
- AWS & Databricks certifications fully covered (plus a salary increase as a reward for AWS certifications!).
- Food credits via Pedidos Ya – because great work deserves great food.
- Birthday off + an extra vacation week (Mutt Week!)
- Referral bonuses – help us grow the team & get rewarded!
- An unforgettable getaway with the team!
Data Engineer
Today
Job Description
At Emi Labs we are on a mission to increase Frontline Workers’ access to professional opportunities.
Frontline Workers are a 2.7-billion-strong population that accounts for 80% of the world's workforce. They are digitally invisible, as there's little to no data available on who they are, their career history, or their skill set, limiting their access to professional opportunities and growth. We're here to transform this by building the infrastructure to make Frontline Workers visible.
Our first step to achieving our mission is to transform the recruiting experience into an easy, human and fair process, for both candidates and companies with high-volume job openings.
Emi, our main product, is an A.I. recruitment assistant that enables companies to engage in a conversation with each applicant to detect interested and qualified individuals while saving Recruiters a huge amount of time by automating tasks such as screening, validating skills, scheduling interviews, and collecting documents.
We were part of Y Combinator's Winter 2019 batch, and in 2022 we raised an $11M funding round co-led by Merus Capital and Khosla Ventures.
About the role
We’re looking for a Data Engineer to join our data team and help us empower decision-making with high-quality data and insightful analysis. You will work closely with stakeholders across teams to design and build scalable data solutions.
Responsibilities:
- Data Infrastructure: Design, develop, and maintain scalable data pipelines and infrastructure to support real-time analytics, data-driven decision-making, and machine learning models.
- Collaboration: Work closely with product, engineering, and leadership teams to ensure data-driven strategies are incorporated into product development and operational processes.
- Data Governance: Establish and maintain robust data governance practices to ensure data quality, integrity, and compliance with relevant regulations.
Requirements:
- 5+ years of experience working in analytics or data engineering roles.
- Strong technical skills in data engineering, data architecture, and analytics.
- Proficiency in SQL, Python, R, cloud platforms (AWS, GCP, Azure), DBT, Airflow, and data visualization tools.
- Development: testing, CI/CD, repository management, shell scripting, terminal usage, etc.
- Proven ability to communicate complex analytical concepts in a clear and accessible way.
- Proactive, curious mindset with a passion for using data to solve real business problems.
- Airbyte, Cosmos.
- AWS: RDS, Redshift, EKS, S3, ECR, IAM, Lambda, Route 53, SSM (a Redshift load sketch follows this list).
- Infrastructure: Cost management, networking, virtualization, Kubernetes, Karpenter, Grafana.
- Concepts of FinOps and DataOps.
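Not from the posting, but as a small sketch of the S3-to-Redshift loading implied by the stack above; the cluster endpoint, bucket, IAM role, and table are all placeholders:

```python
import psycopg2

COPY_SQL = """
    COPY analytics.applicants
    FROM 's3://example-bucket/exports/applicants/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)
with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)  # bulk-load the Parquet export into Redshift
conn.close()
```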
Benefits:
- Competitive salary: Salaries paid in USD.
- Stock Options: A stock options package as part of your compensation.
- Flexible remote-first work culture. We work towards goals.
- Vacations: 3 weeks of vacation.
- Holiday season: Week off between Christmas and New Year's Eve.
- Physical Wellness program: We have partnered up with Gympass, a well-being platform that offers the best coverage of top gyms, studios, and activities for you to choose from.
- English Classes: Improve your English skills with our in-company teachers.
- Internal library: Get all the books you like, digital or physical, for free, anytime.
Emi Labs is committed to fostering a fair, inclusive, and equal work environment. We believe diversity is crucial to building the best team and solving Frontline Worker's access to professional opportunities, that is why Emi aims to be a leader in workplace equality and move both our company and the industry forward.
Emi is a dynamic young startup where growth opportunities are there for the taking! We are just starting to build a high-impact team, so now is the best time to jump aboard!