5,365 Data Engineer Job Openings in Argentina

Big Data Software Engineer

Mar del Plata, Buenos Aires · $120,000 - $200,000 per year · JPMorganChase

Today


Job Description
As a Software Engineer III at JPMorgan Chase within the Commercial & Investment Bank, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

The JPMorgan Chase Commercial & Investment Bank is undertaking a strategic initiative called Client 360, aimed at developing a big data platform and firmwide solution for Entity Resolution and Relationships. We are seeking a Big Data Software Engineer with skills and experience implementing large-scale cloud platforms that process internal and third-party data. This individual will carry out groundbreaking work implementing new solutions for Client 360 - Entity Resolution and Relationships and enhancing the existing platform.

Job Responsibilities

  • Acquire and manage data from primary and secondary data sources
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Transform existing ETL logic on AWS and Databricks
  • Innovate new ways of managing, transforming and validating data
  • Implement new or enhance services and scripts (in both object-oriented and functional programming)
  • Establish and enforce guidelines to ensure consistency, quality and completeness of data assets
  • Apply quality assurance best practices to all work products
  • Analyze, design and implement business-related solutions and core architectural changes using Agile programming methodologies with a development team
  • Become comfortable learning cutting-edge technology stacks and applying them to greenfield projects

Qualifications

  • Proficiency in advanced Python programming, with extensive experience in utilizing libraries such as Pandas and NumPy.
  • Experience in code and infrastructure for Big Data technologies (e.g. Spark, Kafka, Databricks etc.) and implementing complex ETL transformations
  • Experience with AWS services including EC2, EMR, ASG, Lambda, EKS, RDS and others
  • Experience developing APIs leveraging different back-end data stores (RDS, Graph, Dynamo, etc.)
  • Experience in writing efficient SQL queries
  • Strong understanding of linear algebra, statistics, and algorithms.
  • Strong experience with UNIX shell scripting to automate file preparation and database loads
  • Experience in data quality testing; adept at writing test cases and scripts, presenting and resolving data issues
  • Familiarity with relational database environment (Oracle, SQL Server, etc.) leveraging databases, tables/views, stored procedures, agent jobs, etc.
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Strong development discipline and adherence to best practices and standards.
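The Pandas/NumPy and data-quality bullets above are abstract; as a minimal sketch of the kind of consistency and completeness enforcement they describe (column names, values, and cleaning rules are hypothetical, not from the posting):

```python
import pandas as pd
import numpy as np

# Hypothetical client-entity extract; columns and values are illustrative.
df = pd.DataFrame({
    "entity_id": [1, 2, 2, 3],
    "name": [" Acme Corp ", "Globex", "Globex", None],
    "revenue": [1200.0, np.nan, 950.0, 410.0],
})

# Consistency/completeness steps of the kind the qualifications describe:
df["name"] = df["name"].str.strip()            # normalize whitespace
df = df.dropna(subset=["name"])                # drop rows missing a key field
df = df.drop_duplicates(subset="entity_id")    # keep one row per entity
df["revenue"] = df["revenue"].fillna(df["revenue"].median())

print(df.reset_index(drop=True))
```

In a production pipeline these rules would live in reusable, tested transformation steps rather than an ad-hoc script.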

Preferred Qualifications, Capabilities And Skills

  • Experience in Data Science, Machine Learning and AI is a plus
  • Financial Services and Commercial banking experience is a plus
  • Familiarity with NoSQL platforms (MongoDB, AWS Open Search) is a plus

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We offer a competitive total rewards package including base salary determined based on the role, experience, skill set and location. Those in eligible roles may receive commission-based pay and/or discretionary incentive compensation, paid in the form of cash and/or forfeitable equity, awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

JPMorgan Chase & Co. is an Equal Opportunity Employer, including Disability/Veterans

About The Team
Our Corporate Technology team relies on smart, driven people like you to develop applications and provide tech support for all our corporate functions across our network. Your efforts will touch lives all over the financial spectrum and across all our divisions: Global Finance, Corporate Treasury, Risk Management, Human Resources, Compliance, Legal, and within the Corporate Administrative Office. You'll be part of a team specifically built to meet and exceed our evolving technology needs, as well as our technology controls agenda.


Big Data Software Engineer - Python

Chubut, Chubut JPMorganChase

Posted 2 days ago


Job Description

Overview

As a Software Engineer III at JPMorgan Chase within the Commercial & Investment Bank, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Responsibilities
  • Acquire and manage data from primary and secondary data sources
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Transform existing ETL logic on AWS and Databricks
  • Innovate new ways of managing, transforming and validating data
  • Implement new or enhance services and scripts (in both object-oriented and functional programming)
  • Establish and enforce guidelines to ensure consistency, quality and completeness of data assets
  • Apply quality assurance best practices to all work products
  • Analyze, design and implement business-related solutions and core architectural changes using Agile programming methodologies with a development team
  • Become comfortable learning cutting-edge technology stacks and applying them to greenfield projects
Qualifications
  • Proficiency in advanced Python programming, with extensive experience in utilizing libraries such as Pandas and NumPy
  • Experience in code and infrastructure for Big Data technologies (e.g. Spark, Kafka, Databricks etc.) and implementing complex ETL transformations
  • Experience with AWS services including EC2, EMR, ASG, Lambda, EKS, RDS and others
  • Experience developing APIs leveraging different back-end data stores (RDS, Graph, Dynamo, etc.)
  • Experience in writing efficient SQL queries
  • Strong understanding of linear algebra, statistics, and algorithms
  • Strong experience with UNIX shell scripting to automate file preparation and database loads
  • Experience in data quality testing; adept at writing test cases and scripts, presenting and resolving data issues
  • Familiarity with relational database environments (Oracle, SQL Server, etc.) leveraging databases, tables/views, stored procedures, agent jobs, etc.
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Strong development discipline and adherence to best practices and standards
Preferred Qualifications, Capabilities And Skills
  • Experience in Data Science, Machine Learning and AI is a plus
  • Financial Services and Commercial banking experience is a plus
  • Familiarity with NoSQL platforms (MongoDB, AWS Open Search) is a plus
About Us

JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We offer a competitive total rewards package including base salary determined based on the role, experience, skill set and location. Those in eligible roles may receive commission-based pay and/or discretionary incentive compensation, paid in the form of cash and/or forfeitable equity, awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

JPMorgan Chase & Co. is an Equal Opportunity Employer, including Disability/Veterans

About The Team

Our Corporate Technology team relies on smart, driven people like you to develop applications and provide tech support for all our corporate functions across our network. Your efforts will touch lives all over the financial spectrum and across all our divisions: Global Finance, Corporate Treasury, Risk Management, Human Resources, Compliance, Legal, and within the Corporate Administrative Office. You’ll be part of a team specifically built to meet and exceed our evolving technology needs, as well as our technology controls agenda.


Data Engineer

Buenos Aires, Buenos Aires Emi Labs, Inc.

Today


Job Description

Overview

At Emi Labs we are on a mission to increase Frontline Workers’ access to professional opportunities. This is a population of 2.7 billion that accounts for 80% of the world’s workforce. They are digitally invisible, as there’s little to no data available on who they are, their career history, or their skill set, limiting their access to professional opportunities and growth. We’re here to transform this by building the infrastructure to make Frontline Workers visible. Our first step to achieving our mission is to transform the recruiting experience into an easy, human and fair process, for both candidates and companies with high-volume job openings. Emi, our main product, is an A.I. recruitment assistant that enables companies to engage in a conversation with each applicant to detect interested and qualified individuals while saving Recruiters a huge amount of time by automating tasks such as screening, validating skills, scheduling interviews, and collecting documents. We were part of Y Combinator's Winter 2019 batch and in 2022 we raised an $11M funding round co-led by Merus Capital and Khosla Ventures.

About the role

We’re looking for a Data Engineer to join our data team and help us empower decision-making with high-quality data and insightful analysis. You will work closely with stakeholders across teams to design and build scalable data solutions.

Responsibilities
  • Data Infrastructure: Design, develop, and maintain scalable data pipelines and infrastructure to support real-time analytics, data-driven decision-making, and machine learning models.
  • Collaboration: Work closely with product, engineering, and leadership teams to ensure data-driven strategies are incorporated into product development and operational processes.
  • Data Governance: Establish and maintain robust data governance practices to ensure data quality, integrity, and compliance with relevant regulations.
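The data-governance bullet above is abstract; a minimal pure-Python sketch of the kind of quality rule it implies (the table, columns, and rules here are hypothetical, not from the posting) might look like:

```python
# Minimal data-quality checks (required columns + null detection).
rows = [
    {"user_id": 1, "email": "a@example.com", "signup_date": "2024-01-05"},
    {"user_id": 2, "email": None,            "signup_date": "2024-02-11"},
]

REQUIRED = {"user_id", "email", "signup_date"}

def check(rows):
    """Return (row_index, message) pairs for every violated rule."""
    issues = []
    for i, row in enumerate(rows):
        missing = REQUIRED - row.keys()
        if missing:
            issues.append((i, f"missing columns: {sorted(missing)}"))
        for col in REQUIRED & row.keys():
            if row[col] is None:
                issues.append((i, f"null in required column: {col}"))
    return issues

print(check(rows))
```

In practice checks like these would run inside an orchestrator (e.g. an Airflow task) or a dedicated tool, with failures blocking downstream loads.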
What we're looking for
  • 5+ years of experience working in analytics or data engineering roles.
  • Strong technical skills in data engineering, data architecture, and analytics.
  • Proficiency in SQL, Python, R, cloud platforms (AWS, GCP, Azure), DBT, Airflow, and data visualization tools.
  • Development: testing, CI/CD, repository management, shell scripting, terminal usage, etc.
  • Proven ability to communicate complex analytical concepts in a clear and accessible way.
  • Proactive, curious mindset with a passion for using data to solve real business problems.
Nice to have
  • Airbyte, Cosmos.
  • AWS: RDS, Redshift, EKS, S3, ECR, IAM, Lambda, Route 53, SSM.
  • Infrastructure: Cost management, networking, virtualization, Kubernetes, Karpenter, Grafana.
  • Concepts of FinOps and DataOps.
What we offer
  • Competitive salary: Salaries paid in USD.
  • Stock Options: Stock Options Package as part of your compensation package.
  • Flexible remote-first work culture. We work towards goals.
  • 3 weeks of vacation.
  • Holiday season: Week off between Christmas and New Year's Eve.
  • Physical Wellness program: Partnered with Gympass for access to gyms, studios, and activities.
  • English Classes: In-company teachers to help improve English skills.
  • Internal library: Access to free books - digital and physical - anytime.

Emi Labs is committed to fostering a fair, inclusive, and equal work environment. We believe diversity is crucial to building the best team and solving Frontline Workers' access to professional opportunities, which is why Emi aims to be a leader in workplace equality and move both our company and the industry forward.

Emi is a very dynamic and new startup where growth opportunities are there for the taking. We are building a team with great impact, so now is the best time to jump aboard!

Interested in knowing more about Emi Labs?

- LinkedIn profile:

- Web Page:


Data Engineer

Buenos Aires, Buenos Aires Assurant, Inc.

Today


Job Description

  • Gains a thorough understanding of the requirements and ensures that the work product aligns with customer requirements
  • Works within the established development guidelines, standards, methodologies, and naming conventions
  • Builds processes to ingest, process and store massive amounts of data
  • Assists with optimizing the performance of big data ecosystems
  • Wrangles data in support of data science projects
  • Performs productionization of ML and statistical models for Data Scientists & Statisticians
  • Constructs, tests and maintains scalable data solutions for structured and unstructured data to support reporting, analytics, ML and AI
  • Assists with researching and building proofs of concept to test out theories recommended by Senior and Lead Data Engineers
  • Collaborates in identifying project risks, designing mitigation plans, and developing estimates
  • Contributes to the design and development of data pipelines and to feature engineering of data solutions
  • Relies on established processes and methods, but is expected to come up with creative solutions to problems; senior teammates still provide oversight on solutions to complex problems

    Requirements:

    • Experience: Minimum of 5 years of proven experience as a Data Engineer or in a similar role.
    • Robust Technical Skills in:
      • Microsoft Data Factory: Solid experience in designing and building data pipelines.
      • Azure Data Lake: Deep understanding of storing and managing data at scale.
      • Azure: Hands-on experience within the Microsoft Azure data services ecosystem.
      • Databricks: Proficiency in using the Databricks platform for data processing and solution development.
      • Python: Advanced proficiency in Python for data engineering, scripting, and application development.
      • SQL: Advanced experience with SQL for manipulating, querying, and optimizing relational databases.
    • Problem-Solving Mindset: Demonstrated ability to approach and solve complex challenges with a solution-oriented focus.
    • Inquisitive Mindset & Continuous Learning: A constant desire to explore new technologies, methodologies, and enhance skills.
    • Excellent Communication Skills: Ability to communicate clearly and effectively with technical team members and non-technical stakeholders.
    • Ability to work both independently and as part of a collaborative team.
About Us

We work with the world’s top brands to make smart devices simpler. Vehicles last longer. Homes more secure. Problems easier to solve. And we volunteer in communities all over the globe to help the world become a greener, better place. We come from a variety of countries, cultures, and backgrounds. But we’re united by our enduring values of common sense, common decency, uncommon thinking, and uncommon results. So connect with us. Bring us your best work and your brightest ideas. And we’ll bring you a place where you can thrive.

Data Engineer

Mutt Data

Today


Job Description

Join Our Data Products and Machine Learning Development Remote Startup!

Mutt Data is a dynamic startup committed to crafting innovative systems using cutting-edge Big Data and Machine Learning technologies.

We’re looking for a Data Engineer to help take our expertise to the next level. If you consider yourself a data nerd like us, we’d love to connect!

What We Do
  • Leveraging our expertise, we build modern Machine Learning systems for demand planning and budget forecasting.
  • Developing scalable data infrastructures, we enhance high-level decision-making, tailored to each client.
  • Offering comprehensive Data Engineering and custom AI solutions, we optimize cloud-based systems.
  • Using Generative AI, we help e-commerce platforms and retailers create higher-quality ads, faster.
  • Building deep learning models, we enhance visual recognition and automation for various industries, improving product categorization, quality control, and information retrieval.
  • Developing recommendation models, we personalize user experiences in e-commerce, streaming, and digital platforms, driving engagement and conversions.
Our Partnerships
  • Amazon Web Services
  • Google Cloud
  • Astronomer
  • Databricks
  • Kaszek
  • Product Minds
  • H2O.ai
  • Soda
Our Values
  • We are Data Nerds
  • We are Open Team Players
  • We Take Ownership
  • We Have a Positive Mindset

Curious about what we’re up to? Check out our case studies and dive into our blog post to learn more about our culture and the exciting projects we’re working on!

Responsibilities
  • Collaborate with the team to define goals and deliver custom data solutions.
  • Innovate with new tools to improve Mutt Data's infrastructure and processes.
  • Design and implement ETL processes, optimize queries, and automate pipelines.
  • Own projects end-to-end—build, maintain, and improve data systems while working with clients.
  • Develop tools for the team and assist in tech migrations and model design.
  • Build scalable, high-performance data architectures.
  • Focus on code quality—review, document, test, and integrate CI/CD.
Required Skills
  • Experience in Data Engineering, including building and optimizing data pipelines.
  • Strong knowledge of SQL and Python (Pandas, Numpy, Jupyter).
  • Experience working with any cloud (AWS, GCP, Azure).
  • Basic knowledge of Docker.
  • Experience with orchestration tools like Airflow or Prefect.
  • Familiarity with ETL processes and automation.
Nice to Have Skills
  • Experience with stream processing tools like Kafka Streams, Kinesis, or Spark.
  • Solid command of English for understanding and communicating technical concepts (Design Documents, etc.).
Perks
  • 20% of your salary in USD
  • Remote-first culture – work from anywhere!
  • Gympass or sports club stipend to stay active.
  • AWS & Databricks certifications fully covered (plus a salary increase as a reward for AWS certifications!).
  • Food credits via Pedidos Ya – because great work deserves great food.
  • Birthday off + an extra vacation week (Mutt Week!)
  • Referral bonuses – help us grow the team & get rewarded!
  • – an unforgettable getaway with the team!

Data Engineer

Financecolombia

Today


Job Description

We're hiring a highly motivated Data Engineer with expertise in Python, AWS Glue/PySpark, AWS Lambda, Kafka, and relational databases (specifically Postgres). You'll be responsible for designing, developing, and maintaining data processing and management solutions, ensuring data integrity and availability through collaboration with multidisciplinary teams.

Responsibilities:

  • Design and develop efficient data pipelines using Python, AWS Glue/PySpark, AWS Lambda, Kafka, and related technologies.
  • Implement and optimize ETL processes for data extraction, transformation, and loading into relational databases, especially Postgres.
  • Collaborate on data warehouse architectures, ensuring proper data modeling, storage, and access.
  • Utilize tools like StitchData and Apache Hudi for data integration and incremental management, improving efficiency and enabling complex operations.
  • Identify and resolve data quality and consistency issues, implementing monitoring processes across pipelines and storage systems.
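The extract-transform-load flow the responsibilities above describe can be sketched in plain Python (event shape, field names, and the target table are hypothetical): parse raw events as they might arrive from a Kafka topic, validate and normalize them, and stage parameterized rows for a Postgres insert.

```python
import json

# Hypothetical raw events, e.g. messages consumed from a Kafka topic.
raw = [
    '{"order_id": 1, "amount": "19.90", "currency": "USD"}',
    '{"order_id": 2, "amount": "bad",   "currency": "USD"}',
    '{"order_id": 3, "amount": "5.25",  "currency": "ARS"}',
]

def transform(line: str):
    """Parse one event and normalize the amount; return None if invalid."""
    rec = json.loads(line)
    try:
        rec["amount"] = float(rec["amount"])
    except ValueError:
        return None  # in a real pipeline, route to a dead-letter queue
    return rec

clean = [r for r in (transform(l) for l in raw) if r is not None]

# "Load" step: parameterized rows, e.g. for a psycopg2 executemany() call.
sql = "INSERT INTO orders (order_id, amount, currency) VALUES (%s, %s, %s)"
params = [(r["order_id"], r["amount"], r["currency"]) for r in clean]
print(params)
```

On AWS Glue the same transform would typically be expressed as a PySpark job over a DynamicFrame rather than a Python loop, but the validate-then-load structure is the same.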

Requirements:

  • Strong Python skills and proficiency in associated libraries for data processing and manipulation.
  • Expertise in AWS Glue/PySpark, AWS Lambda, and Kafka for ETL workflow development and streaming architecture.
  • Experience in designing and implementing relational databases, specifically Postgres.
  • Practical knowledge of data pipeline development, ETL processes, and data warehouses.
  • Familiarity with data integration tools like StitchData and Apache Hudi for efficient incremental data management.
  • Advanced level of English for effective communication and collaboration.

Benefits:

  • Competitive salary.
  • Annual offsite team trip.
  • Learn from a very high performing team.
  • Training from US mentors.

If you're a passionate Data Engineer with experience in these technologies and seek a stimulating and challenging environment, join our team and contribute to the success of our advanced data processing and management solutions!


Data Engineer

Buenos Aires, Buenos Aires AlixPartners GmbH

Today


Job Description

Client Services - Partnerships and Platforms - Experienced Professional

At AlixPartners, we solve the most complex and critical challenges by moving quickly from analysis to action when it really matters; creating value that has a lasting impact on companies, their people, and the communities they serve. By understanding, respecting, and honoring the needs of our employees, clients, and communities, AlixPartners actively promotes an inclusive environment. We strongly believe in the value that diversity brings to our experiences and are committed to the perpetual enhancements of initiatives, policies, and practices. We hold ourselves accountable by providing the space for authenticity, growth, and equity for everyone.

AlixPartners has embraced a hybrid work model to provide flexibility and support our employees’ work-life integration. Our hybrid model combines a mix of in-person (at client site or AlixPartners office) and remote working. Travel is part of this position, but the frequency may vary based on client, team, and individual circumstances.

What you’ll do

The Data Engineer is a self-directed engineer passionate about data engineering, architecture, and data sets to enable use case realization at scale. This person can identify and work with data across a variety of formats and make the data available to an equally varied set of customers: consultants, data scientists, AI/ML services, and web-based applications. The engineer fundamentally understands data ingestion techniques and the engineering necessary to push those frameworks to maximize the value of data. This person is comfortable working with, deploying, and managing data solutions using cloud-based technologies.

The Data Engineer is a Full Time Employee role located in Buenos Aires.

Outcomes:

  • Design, build, and maintain sustainable data pipelines for firm-wide data sets, supporting discovery and consumption
  • Ensure data access for users, applications, and services to make informed decisions and business case optimizations
  • Identify critical gaps and develop ETL processes
  • Be a domain expert in stored data and use cases

This description is not designed to encompass a comprehensive listing of required activities, duties or responsibilities.

What you’ll need
  • Bachelor’s degree in computer science or related field
  • 8+ years of experience working in Data Engineering roles.
  • 4+ years of experience with RDBMS (e.g. Postgres, MySQL, Oracle)
  • 2+ years of experience building data solutions on Azure and other data / analytical platforms
  • 3+ years of experience architecting database topologies across a wide array of formats
  • 2+ years of experience working with complex schemas and managing views into data
  • Strong knowledge and hands-on experience with data and data processes quality and its fit for purpose
  • Medium knowledge of why/when to use non-SQL, columnar, data sharding solutions
  • Medium knowledge and hands-on experience w/ benchmarking and performance tuning data solutions
  • Medium knowledge and hands-on experience writing high-performance data queries and associated database designs
  • Low working knowledge of at least one NoSQL database
  • Low working knowledge of Restful API design
  • Strong problem-solving skills with the ability to craft innovative solutions
  • Excellent communication, written and verbal, in English. Proficiency in other languages is a plus.
  • Strong interpersonal skills, with the ability to effectively collaborate with diverse stakeholders and influence decision-making; adept at translating complex technical details to diverse audiences
  • Desire to actively engage in geographically dispersed teams
  • Ability to work full time in an office and remote environment; physically able to sit/stand at a computer and work in front of a computer screen for significant portions of the workday
  • Must become familiar with, and promote and abide by, our Core Values as defined by the AlixPartners’ Code of Conduct and foster an inclusive environment with people at all levels of an organization
  • Understands the implications of design decisions across datasets
  • Grasps sparse data sets, storage mechanics, and analytical application requirements
  • Manages data lifecycles (archiving, access, cost) and communicates these concerns to stakeholders
  • Collaborates across the organization to understand needs and ensures data remains meaningful
  • Pays meticulous attention to data management details like data growth rate, monitoring, query performance for common data use cases and builds insights to anticipate challenges before they happen
  • Exhibits a strong sense of quality and attention to detail

AlixPartners is a results-driven global consulting firm that specializes in helping businesses successfully address their most complex and critical challenges. You can learn more about our Culture and Career Development opportunities.

The firm offers market-leading benefits that provide flexible options to support our employees’ needs, including health benefits to help prioritise their physical and emotional well-being, time-off policies to help them recharge, and financial/retirement benefits that offer income protection and support long-term planning.

The benefit type and level differ per location.

AlixPartners is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to, among other things, race, colour, religion, sex, sexual orientation, gender identity, national origin, age, veteran status, or disability.


About the latest Data Engineer jobs in Argentina

Data Engineer

RYZ Labs

Today


Job Description

Overview

RYZ Labs is looking for a Senior Data Engineer to join one of our clients in designing, building, and maintaining scalable data solutions that power a cutting-edge cloud and edge-based IoT platform. In this key role, you’ll help shape the data infrastructure that drives real-time analytics, automation, and intelligent insights across industries. Working alongside a team of talented engineers, you’ll deliver high-performance systems—from ingesting edge device data to optimizing complex cloud pipelines on AWS.

Qualifications
  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 3+ years of experience in software or data engineering within multidisciplinary teams.
  • Proficiency in SQL and scripting languages such as Python or JavaScript.
  • Experience with Linux, Docker, Git, and Terraform.
  • Strong analytical, communication, and problem-solving skills, with a proactive and ownership-driven mindset.
You will
  • Design, build, and maintain secure and scalable data pipelines using AWS services (Glue, Lambda, Athena, Timestream, S3) and edge technologies (MQTT, AWS IoT Greengrass).
  • Implement and manage Infrastructure as Code (IaC) using Terraform.
  • Write and optimize complex SQL queries for data transformation and validation.
  • Ensure observability and security through tools like CloudWatch and Grafana, following best practices for IAM, encryption, and access control.
  • Collaborate cross-functionally with software, DevOps, and product teams to deliver comprehensive and reliable data solutions.
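The ingestion path described above (edge devices publishing over MQTT, landing in Timestream or S3) can be sketched with a small normalization step. The topic layout (`devices/<device_id>/telemetry`) and the payload fields here are assumptions for illustration, not the client's actual schema:

```python
import json
from datetime import datetime, timezone

def mqtt_payload_to_record(topic: str, payload: bytes) -> dict:
    """Normalize a raw MQTT message from an edge device into a flat
    record suitable for writing to Timestream or S3. Topic layout and
    payload fields are hypothetical."""
    # Topic assumed to look like: devices/<device_id>/telemetry
    device_id = topic.split("/")[1]
    body = json.loads(payload)
    return {
        "device_id": device_id,
        "metric": body["metric"],
        "value": float(body["value"]),
        # Fall back to ingestion time when the device omits a timestamp.
        "ts": body.get("ts") or datetime.now(timezone.utc).isoformat(),
    }
```

In a real pipeline this function would sit behind an AWS IoT rule or Greengrass component, with the returned record batched into `timestream-write` or serialized to S3 for Glue/Athena.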
About RYZ Labs

RYZ Labs is a startup studio founded in 2021 by two lifelong entrepreneurs. The founders of RYZ have worked at some of the world's largest tech companies and some of the most iconic consumer brands. They have lived and worked in Argentina for many years and have decades of experience in Latam. What brought them together was their passion for the early phases of company creation and the idea of attracting the brightest talents in order to build industry-defining companies in a post-pandemic world.

Our teams are remote and distributed throughout the US and Latam. They use cutting-edge cloud computing technologies to create scalable and resilient applications. We aim to provide diverse product solutions for different industries and plan to build a large number of startups in the upcoming years.

At RYZ, you will find yourself working with autonomy and efficiency, owning every step of your development. We provide an environment of opportunities, learning, growth, expansion, and challenging projects. You will deepen your experience while sharing and learning from a team of great professionals and specialists.

Our values
  • Customer First Mentality - Every decision we make should be made through the lens of the customer.
  • Bias for Action - Urgency is critical; expect accelerated timelines to get things done.
  • Ownership - Step up if you see an opportunity to help, even if it's not your core responsibility.
  • Humility and Respect - Be willing to learn, be vulnerable, and treat everyone who interacts with RYZ with respect.
  • Frugality - Being frugal and cost-conscious helps us do more with less.
  • Deliver Impact - Get things done in the most efficient way.
  • Raise our Standards - Always look to improve our processes, our team, and our expectations. The status quo is not good enough and never should be.

Data Engineer

Buenos Aires, Buenos Aires Chevron Corporation


Locations: Buenos Aires, Buenos Aires, Argentina | Time type: Full time | Posted Today | Job requisition id: R | Total number of openings: 1

Chevron’s Business Support Center (BASSC), located in Buenos Aires, is accepting applications for the position of Data Engineer. Successful candidates will join the IT Organization, which is part of a multifunction service center with a workforce of more than 1,800 employees that delivers business services and solutions to the corporation across the globe.

The Data Engineer will be responsible for designing, setting up, and building integration jobs to move and store data from existing Systems of Record (SoR) into and through the Chevron Microsoft Cloud environment (Azure Data Lake, Azure SQL, Azure Data Warehouse), and into other sources. This includes ETL development from on-premises databases, data transformation using Databricks (Python, Spark, Scala, SQL), and orchestration of workflows via Azure Data Factory. The candidate will follow Chevron’s standard integration patterns, tools, and languages, and will also create schemas and build Operational Controls (OC) for teams to manage SoRs once created. Responsibilities include ensuring alignment with vendors and third parties, managing code versioning via Azure DevOps, and optimizing data flow processes.

Responsibilities for this position may include but are not limited to:
  • Understand the business use of data and stakeholder requirements to support strategic business objectives.
  • Collaborate with delivery teams to provide data management direction and support for initiatives and product development.
  • Contribute to the design of common information models.
  • Consult on the appropriate data integration patterns, data modeling, and data quality.
  • Maintain and share knowledge of requirements, key data types and data definitions, data stores, and data creation processes.

Required Qualifications
  • Minimum 3 years of experience with data analysis/modeling, data acquisition/ingestion, data cleaning, and data engineering/pipelines.
  • Importing data via APIs, ODBC, Azure Data Factory, or Azure Databricks from various systems of record such as Azure Blob Storage, SQL databases, NoSQL databases, data lakes, and data warehouses.
  • Experience with Azure Databricks.
  • Data transformation using Python, Spark, Scala, and SQL in Databricks.
  • Data pipeline development.
  • Configuring data flows and data pipelines into various data analytics tools such as Power BI, Azure Analysis Services, or other data science tools.
  • Troubleshooting and supporting data issues within different solution products.
  • Communicating clearly and professionally with technical and non-technical stakeholders.

Preferred Qualifications
  • Knowledge of Ansible.
  • Experience in building data models for structured and unstructured data.
  • Experience transforming data with various ETL tools and/or scripting.
  • Familiarity with Agile methodologies.
  • Demonstrated accountability and ownership of deliverables.
  • Proven expertise in cloud-based data architecture (Azure), large-scale data processing, and integration of diverse data sources.
  • Familiarity with operational workflows in oil & gas or industrial domains is a plus.
  • CI/CD pipeline experience is a plus.

Relocation Options: Relocation could be considered.
International Considerations: Expatriate assignments will not be considered.
Chevron participates in E-Verify in certain locations as required by law.
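The Databricks transformation step this listing describes (cleaning SoR data before it lands in Azure SQL or the data warehouse) can be sketched in plain Python. Column names and cleaning rules here are hypothetical, chosen only to show the shape of such a step; in Databricks this logic would typically run over Spark DataFrames:

```python
def clean_sor_rows(rows):
    """Illustrative ETL cleaning step of the kind run in Databricks
    before loading into Azure SQL: drop rows missing a primary key,
    trim string fields, and coerce numeric columns. Column names are
    hypothetical."""
    cleaned = []
    for row in rows:
        if not row.get("customer_id"):
            continue  # reject rows without a primary key
        cleaned.append({
            "customer_id": row["customer_id"].strip(),
            "country": (row.get("country") or "").strip().upper(),
            "balance": float(row.get("balance") or 0.0),
        })
    return cleaned
```

Keeping such rules in one pure function makes them easy to unit-test locally before wiring them into an Azure Data Factory pipeline.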

Data Engineer

Charles Taylor


Background

Digital innovation is reshaping the insurance industry - We’re making it happen. Charles Taylor InsureTech was established to help insurance businesses drive change through the delivery of technology enabled solutions. We don’t have a one-size-fits-all approach or prescriptive methodology. We work consultatively with our clients to revitalise their operations, reinvent established processes and implement future-ready solutions that deliver measurable benefit and improve data-driven decision making.

Charles Taylor is a global provider of professional services and technology solutions dedicated to enabling the global insurance market to do its business fundamentally better. Dating back to 1884, Charles Taylor is now present in more than 120 locations spread across 30 countries in Europe, the Americas, Asia Pacific, the Middle East and Africa.

Charles Taylor believes that it holds a distinctive position in its markets in that it is able to provide professional services and technology solutions in order to support every stage of the insurance lifecycle and every aspect of the insurance operating model. Charles Taylor serves a diversified blue-chip international customer base that includes national and international insurance companies, mutuals, captives, MGAs, Lloyd's syndicates and reinsurers, along with brokers, distributors and corporate insureds.

Charles Taylor has three distinct business areas – Claims Services, InsureTech and Insurance Management.

Charles Taylor was recently acquired by an investment company managed and controlled by Lovell Minnick Partners LLC. Lovell Minnick is a US Private Equity firm that invests in the global financial services industry, including related technology and business services companies, with a focus on helping to build long term value for clients, employees and shareholders. The acquisition will support the continuation of Charles Taylor's successful growth strategy, with a focus on expanding client relationships, broadening specialist capabilities and the range of services and technology solutions, deepening geographic coverage, and reinvesting in quality of service and technology.

For more information, please visit

The Role

We are looking for a Senior Data Engineer with proven experience in Big Data ecosystems to design, build, and optimize data pipelines and aggregated models that enable reliable and scalable reporting. This role requires strong hands-on expertise with Spark, Scala, and Impala, as well as a solid background in implementing ETL processes in complex environments.

Key Responsibilities
  • Demonstrate and champion Charles Taylor Values by ensuring Agility, Integrity, Care, Accountability, and Collaboration.
  • Design, develop, and maintain ETL processes for large-scale data ingestion, transformation, and integration.
  • Build aggregated data models to support business reporting and analytics.
  • Work with Big Data technologies (Spark, Scala, Impala) to ensure high-performance data pipelines.
  • Optimize queries and processing workflows for efficiency and scalability.
  • Partner with business stakeholders, data analysts, and BI teams to translate requirements into technical solutions.
  • Ensure data quality, consistency, and governance across all layers of the architecture.
  • Contribute to the continuous improvement of development practices and data engineering standards.
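The aggregated data models mentioned above amount to rolling transactional rows up to reporting grain. A minimal sketch, assuming hypothetical claim columns; in production this would be a Spark (Scala) or Impala job rather than plain Python:

```python
from collections import defaultdict

def aggregate_claims(rows):
    """Roll transactional rows up to one row per (region, product),
    keeping a count and a running total. Column names are illustrative,
    not Charles Taylor's actual schema."""
    totals = defaultdict(lambda: {"count": 0, "amount": 0.0})
    for row in rows:
        key = (row["region"], row["product"])
        totals[key]["count"] += 1
        totals[key]["amount"] += row["amount"]
    # Convert to plain dicts so the result is serializable as-is.
    return {key: dict(val) for key, val in totals.items()}
```

The same group-by shape translates directly to a `GROUP BY region, product` with `COUNT(*)` and `SUM(amount)` in Impala SQL.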
Required Skills
  • 3+ years of experience in Data Engineering or similar roles.
  • Proven expertise in Spark (Scala) and Impala within Big Data environments.
  • Strong experience with ETL design, implementation, and optimization.
  • Advanced knowledge of distributed computing and parallel processing.
  • Solid understanding of data modeling and performance tuning.
  • Familiarity with data governance, data quality frameworks, and best practices.
  • Strong problem-solving skills and ability to work in agile, fast-paced environments.
  • Excellent communication skills in English (written and spoken).
Why join Charles Taylor InsureTech?

The Charles Taylor InsureTech team blends hands-on insurance expertise with fresh thinking from the worlds of technology consulting, financial services, ecommerce and beyond.

As a newly established business which is part of Charles Taylor, we combine the agility of a start-up with the security and scale of a corporate. The result is a pragmatic-yet-pioneering approach that we’ve used to help clients reimagine central market systems, launch self-service digital insurance products, automate regulatory reporting requirements and more.

We are very proud of the fact that nine out of ten of our people recommend Charles Taylor as a place to work. We pride ourselves on having a positive work environment where our people are empowered to make the best decisions and where learning is valued highly and shared across our business.

We are very committed to ensuring our people are given continuous learning and development. As well as structured induction programmes and job training, we provide study support for relevant professional qualifications and have a Core Learning & Development Curriculum.

Charles Taylor is a fun and inclusive place to work where people are truly valued and encouraged to enjoy a host of social and sporting activities available. Quiz nights, tennis tournaments, football matches and a range of other events take place throughout the year.

Equal Opportunity Employer

Here at Charles Taylor we are proud to be an Inclusive Employer. We provide an environment of mutual respect with zero tolerance to discrimination of any kind regardless of age, disability, gender identity, marital/ family status, race, religion, sex or sexual orientation.

Our external partnerships and the dedicated work we do in promoting a transparent and fair recruitment and selection process all contribute to the successful, inclusive and diverse culture and environment which we are proud to be a part of at Charles Taylor.

