AIOps Advisory Researcher
Lenovo (Schweiz) GmbH
Publication date: 28 May 2025
Workload: 100%
Contract type: Permanent
Work location: Roche
Job summary
Joining Lenovo means joining an innovative company at the cutting edge of technology. This hybrid position offers a dynamic environment with many benefits.
Tasks
- Design AI models for diagnosing server failures.
- Use time-series analysis to predict system behavior.
- Collaborate with teams on critical projects.
Skills
- 5+ years of experience in data management and coding required.
- Proficiency in data management and databases.
- Skills in machine learning and performance optimization.
Why Work at Lenovo
Description and Requirements
***This is a hybrid role; candidates must be willing to work onsite three days a week in our Morrisville, NC office.***
***This role requires fluency in spoken Mandarin. Only candidates fluent in both English and Mandarin will be considered.***
Job Responsibilities:
- Design and implement cutting-edge AI models tailored for server failure diagnosis and anomaly detection.
- Leverage time-series analysis techniques to model and predict system behavior, identifying potential faults or irregularities based on historical data trends.
- Integrate machine learning algorithms to provide real-time predictions and root cause analysis.
- Refine data models to improve diagnostic accuracy and system reliability in mission-critical environments.
- Bridge communication and facilitate the collaboration between ICI Lab China and ISG/SSG WW teams on multiple critical business projects.
- Manage data, models, and infrastructure in mandatory alignment with BIS compliance requirements.
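To illustrate the kind of time-series fault detection the responsibilities above describe, here is a minimal sketch of rolling z-score anomaly detection on server telemetry. The window size, threshold, and sample readings are illustrative assumptions, not values taken from the role description.

```python
from statistics import mean, stdev

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose rolling z-score exceeds the threshold.

    A toy stand-in for time-series anomaly detection on server
    telemetry (e.g. temperature or latency samples). Real systems
    would use richer models (LSTM, seasonal decomposition, etc.).
    """
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]          # trailing window
        mu, sigma = mean(history), stdev(history)
        # A point far outside the recent distribution is flagged.
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady readings with one spike at index 10
readings = [50, 51, 50, 52, 51, 50, 51, 52, 50, 51, 95, 51, 50]
print(detect_anomalies(readings))  # -> [10]
```

In production, the same idea generalizes to streaming pipelines: the rolling statistics become stateful aggregations, and the flagged indices feed a root-cause analysis stage.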
Minimum Requirements:
- 5+ years of relevant work experience in coding and data management.
- Fluent in Chinese communication with professional proficiency in reading/writing technical documentation.
- Expertise in data management, data cleaning, feature engineering, and end-to-end ETL pipeline design.
- Advanced performance tuning experience with relational databases (MySQL/PostgreSQL) and NoSQL databases (MongoDB/Cassandra).
- Proficient in server hardware management, including hardware troubleshooting, cluster maintenance, and performance optimization for large-scale computing environments.
Preferred Requirements:
- Master’s degree or higher in Computer Science, Software Engineering, or related fields.
- Experience in NLP, machine learning, deep learning, data mining, or pattern recognition.
- Strong familiarity with LSTM/CNN/RNN architectures.
- Experience with BERT, LLMs (Large Language Models), or related transformer-based models.
- Proficient in at least one machine learning framework (TensorFlow/PyTorch/Keras) with production-level implementation experience.
- Experience with Spark/Flink for building PB-scale data processing pipelines.
- Working knowledge of data lake technologies (Delta Lake/Hudi) and cloud data warehouses (Snowflake/Redshift).