4,362 Data Scientist, Applied AI - Latin America - Remote job listings in Argentina

Data Scientist, Applied AI - Latin America - Remote

Quilmes, Buenos Aires - Azumo, LLC

Posted 2 days ago

Job Description

Azumo is currently looking for a highly motivated Data Scientist / Machine Learning Engineer to develop and enhance our data and analytics infrastructure. The position is FULLY REMOTE, based in Latin America.

This position will provide you with the opportunity to collaborate with a dynamic team of talented data scientists in the field of big data analytics and applied AI. If you have a passion for designing and implementing advanced machine learning and deep learning models, particularly in the Generative AI space, this role is perfect for you. We are seeking a skilled professional with expertise in Python for production-level projects, proficiency in machine learning and deep learning techniques such as CNNs and Transformers, and hands-on experience working with PyTorch.

We’re looking for a versatile Machine Learning Engineer / Data Scientist to join our big-data analytics team. In this hybrid role you’ll not only design and prototype novel ML/DL models, but also productionize them end-to-end, integrating your solutions into our data pipelines and services. You’ll work closely with data engineers, software developers, and product owners to ensure high-quality, scalable, maintainable systems.

Key Responsibilities

Model Development & Productionization

  • Design, train, and validate supervised and unsupervised models (e.g., anomaly detection, classification, forecasting).
  • Architect and implement deep learning solutions (CNNs, Transformers) with PyTorch.
  • Develop and fine-tune Large Language Models (LLMs) and build LLM-driven applications.
  • Implement Retrieval-Augmented Generation (RAG) pipelines and integrate with vector databases.
  • Build robust pipelines to deploy models at scale (Docker, Kubernetes, CI/CD).

Data Engineering & MLOps

  • Ingest, clean, and transform large datasets using libraries like pandas, NumPy, and Spark.
  • Automate training and serving workflows with Airflow or similar orchestration tools.
  • Monitor model performance in production; iterate on drift detection and retraining strategies.
  • Implement LLMOps practices for automated testing, evaluation, and monitoring of LLMs.

Software Development Best Practices

  • Write production-grade Python code that follows SOLID principles, backed by unit tests and code reviews.
  • Collaborate in Agile (Scrum) ceremonies; track work in JIRA.
  • Document architecture and workflows using PlantUML or comparable tools.

Cross-Functional Collaboration

  • Communicate analysis, design and results clearly in English.
  • Partner with DevOps, data engineering and product teams to align on requirements and SLAs.

At Azumo we strive for excellence and strongly believe in professional and personal growth. We want each individual to be successful and pledge to help each person achieve their goals while at Azumo and beyond. Challenging ourselves and learning new technologies is at the core of what we do. We believe in giving back to our community, volunteering our time for philanthropy, open-source initiatives, and knowledge sharing.

Based in San Francisco, California, Azumo is an innovative software development firm helping organizations make insightful decisions using the latest technologies in data, cloud and mobility. We combine expertise in strategy, data science, application development and design to drive digital transformation initiatives for companies of all sizes.

If you are qualified for the opportunity and looking for a challenge, please apply online at Azumo/join-our-team or connect with us at

Minimum Qualifications

  • Bachelor’s or Master’s in Computer Science, Data Science or related field.
  • 5+ years of professional experience with Python in production environments.
  • Solid background in machine learning & deep learning (CNNs, Transformers, LLMs).
  • Hands-on experience with PyTorch or similar frameworks (training, custom modules, optimization).
  • Proven track record deploying ML solutions.
  • Expert in pandas, NumPy, and scikit-learn.
  • Familiarity with Agile/Scrum practices and tooling (JIRA, Confluence).
  • Strong foundation in statistics and experimental design.
  • Excellent written and spoken English.

Preferred Qualifications

  • Experience with cloud platforms (AWS, GCP, or Azure) and their AI-specific services such as Amazon SageMaker, Google Vertex AI, or Azure Machine Learning.
  • Familiarity with big-data ecosystems (Spark, Hadoop).
  • Practice in CI/CD and container orchestration (Jenkins/GitLab CI, Docker, Kubernetes).
  • Exposure to MLOps/LLMOps tools (MLflow, Kubeflow, TFX).
  • Experience with Large Language Models, Generative AI, prompt engineering, and RAG pipelines.
  • Hands-on experience with vector databases (e.g., Pinecone, FAISS).
  • Experience building AI agents and using frameworks like Hugging Face Transformers, LangChain, or LangGraph.
  • Documentation skills using PlantUML or similar.

Benefits

  • Paid time off (PTO)
  • U.S. holidays
  • Training
  • Free Udemy Premium access
  • Mentored career development
  • Profit sharing
  • Remuneration in US dollars

#J-18808-Ljbffr
Lo sentimos, este trabajo no está disponible en su región

Data Scientist, Applied AI - Latin America - Remote

Mercedes, Buenos Aires Azumo, LLC

Publicado hace 2 días

Trabajo visto

Toque nuevamente para cerrar

Descripción Del Trabajo

Azumo is currently looking for a highly motivated Data Scientist / Machine Learning Engineer to develop and enhance our data and analytics infrastructure. The position is FULLY REMOTE , based in Latin America.

This position will provide you with the opportunity to collaborate with a dynamic team and talented data scientists in the field of big data analytics and applied AI . If you have a passion for designing and implementing advanced machine learning and deep learning models, particularly in the Generative AI space, this role is perfect for you. We are seeking a skilled professional with expertise in Python for production-level projects, proficiency in machine learning and deep learning techniques such as CNNs and Transformers , and hands-on experience working with PyTorch .

We’re looking for a versatile Machine Learning Engineer / Data Scientist to join our big-data analytics team. In this hybrid role you’ll not only design and prototype novel ML/DL models , but also productionize them end-to-end, integrating your solutions into our data pipelines and services. You’ll work closely with data engineers, software developers and product owners to ensure high-quality, scalable, maintainable systems.

Key Responsibilities

Model Development & Productionization

  • Design, train, and validate supervised and unsupervised models (e.g., anomaly detection, classification, forecasting).
  • Architect and implement deep learning solutions (CNNs, Transformers) with PyTorch .
  • Develop and fine-tune Large Language Models (LLMs) and build LLM-driven applications.
  • Implement Retrieval-Augmented Generation (RAG) pipelines and integrate with vector databases.
  • Build robust pipelines to deploy models at scale (Docker , Kubernetes , CI/CD ).

Data Engineering & MLOps

  • Ingest, clean and transform large datasets using libraries like pandas , NumPy , and Spark .
  • Automate training and serving workflows with Airflow or similar orchestration tools.
  • Monitor model performance in production; iterate on drift detection and retraining strategies.
  • Implement LLMOps practices for automated testing, evaluation, and monitoring of LLMs.

Software Development Best Practices

  • Write production-grade Python code following SOLID principles, unit tests and code reviews.
  • Collaborate in Agile (Scrum) ceremonies; track work in JIRA .
  • Document architecture and workflows using PlantUML or comparable tools.

Cross-Functional Collaboration

  • Communicate analysis, design and results clearly in English.
  • Partner with DevOps, data engineering and product teams to align on requirements and SLAs.

At Azumo we strive for excellence and strongly believe in professional and personal growth. We want each individual to be successful and pledge to help each achieve their goals while at Azumo and beyond. Challenging ourselves and learning new technologies is at the core of what we do. We believe in giving back to our community and will volunteer our time to philanthropy, open source initiatives and sharing our knowledge.

Based in San Francisco, California, Azumo is an innovative software development firm helping organizations make insightful decisions using the latest technologies in data, cloud and mobility. We combine expertise in strategy, data science, application development and design to drive digital transformation initiatives for companies of all sizes.

If you are qualified for the opportunity and looking for a challenge please apply online at Azumo/join-our-team or connect with us at

Minimum Qualifications

  • Bachelor’s or Master’s in Computer Science, Data Science or related field.
  • 5+ years of professional experience with Python in production environments.
  • Solid background in machine learning & deep learning (CNNs , Transformers , LLMs ).
  • Hands-on experience with PyTorch or similar frameworks (training, custom modules, optimization).
  • Proven track record deploying ML solutions .
  • Expert in pandas , NumPy and scikit-learn .
  • Familiarity with Agile/Scrum practices and tooling (JIRA , Confluence ).
  • Strong foundation in statistics and experimental design.
  • Excellent written and spoken English.

Preferred Qualifications

  • Experience with cloud platforms (AWS , GCP , or Azure ) and their AI-specific services like Amazon SageMaker , Google Vertex AI , or Azure Machine Learning .
  • Familiarity with big-data ecosystems (Spark , Hadoop ).
  • Practice in CI/CD & container orchestration (Jenkins/GitLab CI , Docker , Kubernetes ).
  • Exposure to MLOps/LLMOps tools (MLflow , Kubeflow , TFX ).
  • Experience with Large Language Models , Generative AI , prompt engineering , and RAG pipelines .
  • Hands-on experience with vector databases (e.g., Pinecone , FAISS ).
  • Experience building AI Agents and using frameworks like Hugging Face Transformers , LangChain or LangGraph .
  • Documentation skills using PlantUML or similar.
  • Paid time off (PTO)
  • U.S. Holidays
  • Training
  • Udemy free Premium access
  • Mentored career development
  • Profit Sharing
  • $US Remuneration

#J-18808-Ljbffr
Lo sentimos, este trabajo no está disponible en su región

Data Scientist, Applied AI - Latin America - Remote

Avellaneda, Buenos Aires Azumo, LLC

Publicado hace 2 días

Trabajo visto

Toque nuevamente para cerrar

Descripción Del Trabajo

Azumo is currently looking for a highly motivated Data Scientist / Machine Learning Engineer to develop and enhance our data and analytics infrastructure. The position is FULLY REMOTE , based in Latin America.

This position will provide you with the opportunity to collaborate with a dynamic team and talented data scientists in the field of big data analytics and applied AI . If you have a passion for designing and implementing advanced machine learning and deep learning models, particularly in the Generative AI space, this role is perfect for you. We are seeking a skilled professional with expertise in Python for production-level projects, proficiency in machine learning and deep learning techniques such as CNNs and Transformers , and hands-on experience working with PyTorch .

We’re looking for a versatile Machine Learning Engineer / Data Scientist to join our big-data analytics team. In this hybrid role you’ll not only design and prototype novel ML/DL models , but also productionize them end-to-end, integrating your solutions into our data pipelines and services. You’ll work closely with data engineers, software developers and product owners to ensure high-quality, scalable, maintainable systems.

Key Responsibilities

Model Development & Productionization

  • Design, train, and validate supervised and unsupervised models (e.g., anomaly detection, classification, forecasting).
  • Architect and implement deep learning solutions (CNNs, Transformers) with PyTorch .
  • Develop and fine-tune Large Language Models (LLMs) and build LLM-driven applications.
  • Implement Retrieval-Augmented Generation (RAG) pipelines and integrate with vector databases.
  • Build robust pipelines to deploy models at scale (Docker , Kubernetes , CI/CD ).

Data Engineering & MLOps

  • Ingest, clean and transform large datasets using libraries like pandas , NumPy , and Spark .
  • Automate training and serving workflows with Airflow or similar orchestration tools.
  • Monitor model performance in production; iterate on drift detection and retraining strategies.
  • Implement LLMOps practices for automated testing, evaluation, and monitoring of LLMs.

Software Development Best Practices

  • Write production-grade Python code following SOLID principles, unit tests and code reviews.
  • Collaborate in Agile (Scrum) ceremonies; track work in JIRA .
  • Document architecture and workflows using PlantUML or comparable tools.

Cross-Functional Collaboration

  • Communicate analysis, design and results clearly in English.
  • Partner with DevOps, data engineering and product teams to align on requirements and SLAs.

At Azumo we strive for excellence and strongly believe in professional and personal growth. We want each individual to be successful and pledge to help each achieve their goals while at Azumo and beyond. Challenging ourselves and learning new technologies is at the core of what we do. We believe in giving back to our community and will volunteer our time to philanthropy, open source initiatives and sharing our knowledge.

Based in San Francisco, California, Azumo is an innovative software development firm helping organizations make insightful decisions using the latest technologies in data, cloud and mobility. We combine expertise in strategy, data science, application development and design to drive digital transformation initiatives for companies of all sizes.

If you are qualified for the opportunity and looking for a challenge please apply online at Azumo/join-our-team or connect with us at

Minimum Qualifications

  • Bachelor’s or Master’s in Computer Science, Data Science or related field.
  • 5+ years of professional experience with Python in production environments.
  • Solid background in machine learning & deep learning (CNNs , Transformers , LLMs ).
  • Hands-on experience with PyTorch or similar frameworks (training, custom modules, optimization).
  • Proven track record deploying ML solutions .
  • Expert in pandas , NumPy and scikit-learn .
  • Familiarity with Agile/Scrum practices and tooling (JIRA , Confluence ).
  • Strong foundation in statistics and experimental design.
  • Excellent written and spoken English.

Preferred Qualifications

  • Experience with cloud platforms (AWS , GCP , or Azure ) and their AI-specific services like Amazon SageMaker , Google Vertex AI , or Azure Machine Learning .
  • Familiarity with big-data ecosystems (Spark , Hadoop ).
  • Practice in CI/CD & container orchestration (Jenkins/GitLab CI , Docker , Kubernetes ).
  • Exposure to MLOps/LLMOps tools (MLflow , Kubeflow , TFX ).
  • Experience with Large Language Models , Generative AI , prompt engineering , and RAG pipelines .
  • Hands-on experience with vector databases (e.g., Pinecone , FAISS ).
  • Experience building AI Agents and using frameworks like Hugging Face Transformers , LangChain or LangGraph .
  • Documentation skills using PlantUML or similar.
  • Paid time off (PTO)
  • U.S. Holidays
  • Training
  • Udemy free Premium access
  • Mentored career development
  • Profit Sharing
  • $US Remuneration

#J-18808-Ljbffr
Lo sentimos, este trabajo no está disponible en su región

Data Scientist, Applied AI - Latin America - Remote

Azumo, LLC

Publicado hace 2 días

Trabajo visto

Toque nuevamente para cerrar

Descripción Del Trabajo

Azumo is currently looking for a highly motivated Data Scientist / Machine Learning Engineer to develop and enhance our data and analytics infrastructure. The position is FULLY REMOTE , based in Latin America.

This position will provide you with the opportunity to collaborate with a dynamic team and talented data scientists in the field of big data analytics and applied AI . If you have a passion for designing and implementing advanced machine learning and deep learning models, particularly in the Generative AI space, this role is perfect for you. We are seeking a skilled professional with expertise in Python for production-level projects, proficiency in machine learning and deep learning techniques such as CNNs and Transformers , and hands-on experience working with PyTorch .

We’re looking for a versatile Machine Learning Engineer / Data Scientist to join our big-data analytics team. In this hybrid role you’ll not only design and prototype novel ML/DL models , but also productionize them end-to-end, integrating your solutions into our data pipelines and services. You’ll work closely with data engineers, software developers and product owners to ensure high-quality, scalable, maintainable systems.

Key Responsibilities

Model Development & Productionization

  • Design, train, and validate supervised and unsupervised models (e.g., anomaly detection, classification, forecasting).
  • Architect and implement deep learning solutions (CNNs, Transformers) with PyTorch .
  • Develop and fine-tune Large Language Models (LLMs) and build LLM-driven applications.
  • Implement Retrieval-Augmented Generation (RAG) pipelines and integrate with vector databases.
  • Build robust pipelines to deploy models at scale (Docker , Kubernetes , CI/CD ).

Data Engineering & MLOps

  • Ingest, clean and transform large datasets using libraries like pandas , NumPy , and Spark .
  • Automate training and serving workflows with Airflow or similar orchestration tools.
  • Monitor model performance in production; iterate on drift detection and retraining strategies.
  • Implement LLMOps practices for automated testing, evaluation, and monitoring of LLMs.

Software Development Best Practices

  • Write production-grade Python code following SOLID principles, unit tests and code reviews.
  • Collaborate in Agile (Scrum) ceremonies; track work in JIRA .
  • Document architecture and workflows using PlantUML or comparable tools.

Cross-Functional Collaboration

  • Communicate analysis, design and results clearly in English.
  • Partner with DevOps, data engineering and product teams to align on requirements and SLAs.

At Azumo we strive for excellence and strongly believe in professional and personal growth. We want each individual to be successful and pledge to help each achieve their goals while at Azumo and beyond. Challenging ourselves and learning new technologies is at the core of what we do. We believe in giving back to our community and will volunteer our time to philanthropy, open source initiatives and sharing our knowledge.

Based in San Francisco, California, Azumo is an innovative software development firm helping organizations make insightful decisions using the latest technologies in data, cloud and mobility. We combine expertise in strategy, data science, application development and design to drive digital transformation initiatives for companies of all sizes.

If you are qualified for the opportunity and looking for a challenge please apply online at Azumo/join-our-team or connect with us at

Minimum Qualifications

  • Bachelor’s or Master’s in Computer Science, Data Science or related field.
  • 5+ years of professional experience with Python in production environments.
  • Solid background in machine learning & deep learning (CNNs , Transformers , LLMs ).
  • Hands-on experience with PyTorch or similar frameworks (training, custom modules, optimization).
  • Proven track record deploying ML solutions .
  • Expert in pandas , NumPy and scikit-learn .
  • Familiarity with Agile/Scrum practices and tooling (JIRA , Confluence ).
  • Strong foundation in statistics and experimental design.
  • Excellent written and spoken English.

Preferred Qualifications

  • Experience with cloud platforms (AWS , GCP , or Azure ) and their AI-specific services like Amazon SageMaker , Google Vertex AI , or Azure Machine Learning .
  • Familiarity with big-data ecosystems (Spark , Hadoop ).
  • Practice in CI/CD & container orchestration (Jenkins/GitLab CI , Docker , Kubernetes ).
  • Exposure to MLOps/LLMOps tools (MLflow , Kubeflow , TFX ).
  • Experience with Large Language Models , Generative AI , prompt engineering , and RAG pipelines .
  • Hands-on experience with vector databases (e.g., Pinecone , FAISS ).
  • Experience building AI Agents and using frameworks like Hugging Face Transformers , LangChain or LangGraph .
  • Documentation skills using PlantUML or similar.
  • Paid time off (PTO)
  • U.S. Holidays
  • Training
  • Udemy free Premium access
  • Mentored career development
  • Profit Sharing
  • $US Remuneration

#J-18808-Ljbffr
Lo sentimos, este trabajo no está disponible en su región

Data Scientist, Applied AI - Latin America - Remote

Corrientes, Corrientes Azumo, LLC

Publicado hace 2 días

Trabajo visto

Toque nuevamente para cerrar

Descripción Del Trabajo

Azumo is currently looking for a highly motivated Data Scientist / Machine Learning Engineer to develop and enhance our data and analytics infrastructure. The position is FULLY REMOTE , based in Latin America.

This position will provide you with the opportunity to collaborate with a dynamic team and talented data scientists in the field of big data analytics and applied AI . If you have a passion for designing and implementing advanced machine learning and deep learning models, particularly in the Generative AI space, this role is perfect for you. We are seeking a skilled professional with expertise in Python for production-level projects, proficiency in machine learning and deep learning techniques such as CNNs and Transformers , and hands-on experience working with PyTorch .

We’re looking for a versatile Machine Learning Engineer / Data Scientist to join our big-data analytics team. In this hybrid role you’ll not only design and prototype novel ML/DL models , but also productionize them end-to-end, integrating your solutions into our data pipelines and services. You’ll work closely with data engineers, software developers and product owners to ensure high-quality, scalable, maintainable systems.

Key Responsibilities

Model Development & Productionization

  • Design, train, and validate supervised and unsupervised models (e.g., anomaly detection, classification, forecasting).
  • Architect and implement deep learning solutions (CNNs, Transformers) with PyTorch .
  • Develop and fine-tune Large Language Models (LLMs) and build LLM-driven applications.
  • Implement Retrieval-Augmented Generation (RAG) pipelines and integrate with vector databases.
  • Build robust pipelines to deploy models at scale (Docker , Kubernetes , CI/CD ).

Data Engineering & MLOps

  • Ingest, clean and transform large datasets using libraries like pandas , NumPy , and Spark .
  • Automate training and serving workflows with Airflow or similar orchestration tools.
  • Monitor model performance in production; iterate on drift detection and retraining strategies.
  • Implement LLMOps practices for automated testing, evaluation, and monitoring of LLMs.

Software Development Best Practices

  • Write production-grade Python code following SOLID principles, unit tests and code reviews.
  • Collaborate in Agile (Scrum) ceremonies; track work in JIRA .
  • Document architecture and workflows using PlantUML or comparable tools.

Cross-Functional Collaboration

  • Communicate analysis, design and results clearly in English.
  • Partner with DevOps, data engineering and product teams to align on requirements and SLAs.

At Azumo we strive for excellence and strongly believe in professional and personal growth. We want each individual to be successful and pledge to help each achieve their goals while at Azumo and beyond. Challenging ourselves and learning new technologies is at the core of what we do. We believe in giving back to our community and will volunteer our time to philanthropy, open source initiatives and sharing our knowledge.

Based in San Francisco, California, Azumo is an innovative software development firm helping organizations make insightful decisions using the latest technologies in data, cloud and mobility. We combine expertise in strategy, data science, application development and design to drive digital transformation initiatives for companies of all sizes.

If you are qualified for the opportunity and looking for a challenge please apply online at Azumo/join-our-team or connect with us at

Minimum Qualifications

  • Bachelor’s or Master’s in Computer Science, Data Science or related field.
  • 5+ years of professional experience with Python in production environments.
  • Solid background in machine learning & deep learning (CNNs , Transformers , LLMs ).
  • Hands-on experience with PyTorch or similar frameworks (training, custom modules, optimization).
  • Proven track record deploying ML solutions .
  • Expert in pandas , NumPy and scikit-learn .
  • Familiarity with Agile/Scrum practices and tooling (JIRA , Confluence ).
  • Strong foundation in statistics and experimental design.
  • Excellent written and spoken English.

Preferred Qualifications

  • Experience with cloud platforms (AWS , GCP , or Azure ) and their AI-specific services like Amazon SageMaker , Google Vertex AI , or Azure Machine Learning .
  • Familiarity with big-data ecosystems (Spark , Hadoop ).
  • Practice in CI/CD & container orchestration (Jenkins/GitLab CI , Docker , Kubernetes ).
  • Exposure to MLOps/LLMOps tools (MLflow , Kubeflow , TFX ).
  • Experience with Large Language Models , Generative AI , prompt engineering , and RAG pipelines .
  • Hands-on experience with vector databases (e.g., Pinecone , FAISS ).
  • Experience building AI Agents and using frameworks like Hugging Face Transformers , LangChain or LangGraph .
  • Documentation skills using PlantUML or similar.
  • Paid time off (PTO)
  • U.S. Holidays
  • Training
  • Udemy free Premium access
  • Mentored career development
  • Profit Sharing
  • $US Remuneration

#J-18808-Ljbffr
Lo sentimos, este trabajo no está disponible en su región

Data Scientist, Applied AI - Latin America - Remote

Chaco, Chaco Azumo, LLC

Publicado hace 2 días

Trabajo visto

Toque nuevamente para cerrar

Descripción Del Trabajo

Azumo is currently looking for a highly motivated Data Scientist / Machine Learning Engineer to develop and enhance our data and analytics infrastructure. The position is FULLY REMOTE , based in Latin America.

This position will provide you with the opportunity to collaborate with a dynamic team and talented data scientists in the field of big data analytics and applied AI . If you have a passion for designing and implementing advanced machine learning and deep learning models, particularly in the Generative AI space, this role is perfect for you. We are seeking a skilled professional with expertise in Python for production-level projects, proficiency in machine learning and deep learning techniques such as CNNs and Transformers , and hands-on experience working with PyTorch .

We’re looking for a versatile Machine Learning Engineer / Data Scientist to join our big-data analytics team. In this hybrid role you’ll not only design and prototype novel ML/DL models , but also productionize them end-to-end, integrating your solutions into our data pipelines and services. You’ll work closely with data engineers, software developers and product owners to ensure high-quality, scalable, maintainable systems.

Key Responsibilities

Model Development & Productionization

  • Design, train, and validate supervised and unsupervised models (e.g., anomaly detection, classification, forecasting).
  • Architect and implement deep learning solutions (CNNs, Transformers) with PyTorch .
  • Develop and fine-tune Large Language Models (LLMs) and build LLM-driven applications.
  • Implement Retrieval-Augmented Generation (RAG) pipelines and integrate with vector databases.
  • Build robust pipelines to deploy models at scale (Docker , Kubernetes , CI/CD ).

Data Engineering & MLOps

  • Ingest, clean and transform large datasets using libraries like pandas , NumPy , and Spark .
  • Automate training and serving workflows with Airflow or similar orchestration tools.
  • Monitor model performance in production; iterate on drift detection and retraining strategies.
  • Implement LLMOps practices for automated testing, evaluation, and monitoring of LLMs.

Software Development Best Practices

  • Write production-grade Python code following SOLID principles, unit tests and code reviews.
  • Collaborate in Agile (Scrum) ceremonies; track work in JIRA .
  • Document architecture and workflows using PlantUML or comparable tools.

Cross-Functional Collaboration

  • Communicate analysis, design and results clearly in English.
  • Partner with DevOps, data engineering and product teams to align on requirements and SLAs.

At Azumo we strive for excellence and strongly believe in professional and personal growth. We want each individual to be successful and pledge to help each achieve their goals while at Azumo and beyond. Challenging ourselves and learning new technologies is at the core of what we do. We believe in giving back to our community and will volunteer our time to philanthropy, open source initiatives and sharing our knowledge.

Based in San Francisco, California, Azumo is an innovative software development firm helping organizations make insightful decisions using the latest technologies in data, cloud and mobility. We combine expertise in strategy, data science, application development and design to drive digital transformation initiatives for companies of all sizes.

If you are qualified for the opportunity and looking for a challenge please apply online at Azumo/join-our-team or connect with us at

Minimum Qualifications

  • Bachelor’s or Master’s in Computer Science, Data Science or related field.
  • 5+ years of professional experience with Python in production environments.
  • Solid background in machine learning & deep learning (CNNs , Transformers , LLMs ).
  • Hands-on experience with PyTorch or similar frameworks (training, custom modules, optimization).
  • Proven track record deploying ML solutions .
  • Expert in pandas , NumPy and scikit-learn .
  • Familiarity with Agile/Scrum practices and tooling (JIRA , Confluence ).
  • Strong foundation in statistics and experimental design.
  • Excellent written and spoken English.

Preferred Qualifications

  • Experience with cloud platforms (AWS , GCP , or Azure ) and their AI-specific services like Amazon SageMaker , Google Vertex AI , or Azure Machine Learning .
  • Familiarity with big-data ecosystems (Spark , Hadoop ).
  • Practice in CI/CD & container orchestration (Jenkins/GitLab CI , Docker , Kubernetes ).
  • Exposure to MLOps/LLMOps tools (MLflow , Kubeflow , TFX ).
  • Experience with Large Language Models , Generative AI , prompt engineering , and RAG pipelines .
  • Hands-on experience with vector databases (e.g., Pinecone , FAISS ).
  • Experience building AI Agents and using frameworks like Hugging Face Transformers , LangChain or LangGraph .
  • Documentation skills using PlantUML or similar.

Benefits

  • Paid time off (PTO)
  • U.S. holidays
  • Training
  • Free Udemy Premium access
  • Mentored career development
  • Profit sharing
  • Remuneration in USD

