What You’ll Do
● Collaborate with product, engineering, and data science teams to design, develop, and deploy highly scalable solutions
● Develop and support Marigold’s data-driven decision-making process by building, deploying, and maintaining scalable data science models
● Work through all phases of the data science life cycle, including data collection, cleaning, analysis, modeling, validation, and deployment
● Conduct performance analysis on models and optimize for accuracy and speed
● Review team members’ code for model implementations and ensure adherence to coding best practices
● Participate in or lead architecture reviews to validate data modeling and project design across the organization
● Investigate, analyze, and address data quality issues and model performance issues in a timely manner
● Deliver technical documentation and reports for use by internal teams, customers, and partners
What We’re Looking For
● Degree in Data Science, Computer Science, Statistics, or a related field, or equivalent combination of education and experience
● 7+ years of experience in data science, with a focus on deploying models in enterprise, high-scale environments
● Advanced understanding of statistical modeling, machine learning algorithms, and data analysis techniques
● Proficiency in Python, R, or similar data science languages, including performance tuning of models
● Experience working with SQL databases such as MySQL, PostgreSQL, or equivalent
● Experience with big data processing tools such as Apache Spark, Databricks, ClickHouse, or equivalent
● Experience with cloud computing platforms such as AWS, GCP, or equivalent for data infrastructure
● Excellent communication skills, both verbal and written, with the ability to explain complex technical concepts to non-technical stakeholders
● Demonstrated ability to produce clear and concise technical documentation
What Will Set You Apart
● Experience using modern machine learning frameworks such as TensorFlow, PyTorch, or similar
● Experience with real-time data streaming and processing frameworks such as Kafka, Kinesis, or similar
● Advanced experience working with distributed computing and big data technologies such as Databricks, Snowflake, ClickHouse, or similar
● Experience delivering data models and insights at scale, processing and analyzing large datasets in real time
Data Scientist
Job Category: Tech
Job Type: Remote