SOFT's client, located in New York, NY, is looking for an AI Data Engineer for a long-term contract assignment.
Qualifications:
o High proficiency in Python and SQL
o ETL experience with Databricks using PySpark
o Data processing and analytics/AI in Databricks
o Strong understanding of algorithms, data structures, statistics, and mathematics
o GenAI, LLM deployment and monitoring
Responsibilities:
Seeking an experienced Data Analytics/AI Engineer to join our team to:
o Translate complex business problems into data analytics solutions
o Create data products in Databricks while leveraging AI-powered tools to enhance
workflow automation, perform pattern recognition, analyze and generate content
o Collaborate with business users and engineers to align data analytics solutions with
business goals
Required Technical Skills:
o Databricks Proficiency: Hands-on experience with the AWS Databricks platform; knowledge
of Spark performance and cost-optimization techniques (partitioning, clustering, caching)
o Advanced SQL and Spark Skills: Proficiency in writing complex SQL queries and Spark
code (Python/PySpark) for data manipulation, transformation, aggregation, and analysis
tasks within Databricks notebooks
o GenAI and LLMs: Prompt engineering, RAG, and experience leveraging Large Language
Models (LLMs); familiarity with tools such as Amazon SageMaker and Bedrock
o Data Engineering & Processing: Experience building ETL pipelines and working with big
data frameworks
o System Integration and Deployment: Experience deploying AI models and integrating them
with existing systems via APIs
o Data Visualization: Experience using data visualization tools (Tableau, Power BI) to create
interactive dashboards and communicate insights
o Cloud Computing and MLOps: Proficiency in deploying models using AWS; familiarity
with Domino Data Lab, GitLab CI/CD
o Data Governance and Security: Understanding of data governance principles and
experience implementing security measures to ensure data integrity, confidentiality, and
compliance within the centralized data lake environment