AIOps Advisory Researcher
Lenovo (Schweiz) GmbH
Publication date: 28 May 2025
Workload: 100%
Contract type: Permanent position
Place of work: Roche
Job summary
Join Lenovo as we innovate in AI for server diagnostics. Enjoy a hybrid role!
Tasks
- Design AI models for diagnosing server failures and anomalies.
- Use time-series analysis to predict system behavior and faults.
- Collaborate with teams in China and globally on key projects.
Skills
- 5+ years of experience in coding and data management, plus fluency in Mandarin.
- Expertise in data management and end-to-end ETL pipeline design.
- Proficient in machine learning frameworks like TensorFlow or PyTorch.
Why Work at Lenovo
Description and Requirements
***This is a hybrid role; the candidate must be willing to work onsite three days a week in our Morrisville, NC office.***
***This role requires fluency in Mandarin. Only candidates who are fluent in both English and Mandarin will be considered.***
Job Responsibilities:
- Design and implement cutting-edge AI models tailored for server failure diagnosis and anomaly detection.
- Leverage time-series analysis techniques to model and predict system behavior, identifying potential faults or irregularities based on historical data trends.
- Integrate machine learning algorithms to provide real-time predictions and root cause analysis.
- Refine data models to improve diagnostic accuracy and system reliability in mission-critical environments.
- Bridge communication and facilitate collaboration between ICI Lab China and ISG/SSG WW teams on multiple critical business projects.
- Manage data, models, and infrastructure in alignment with mandatory BIS compliance requirements.
Minimum Requirements:
- 5+ years of relevant work experience in coding and data management.
- Fluent in spoken Chinese, with professional proficiency in reading and writing technical documentation.
- Expertise in data management, data cleaning, feature engineering, and end-to-end ETL pipeline design.
- Advanced performance tuning experience with relational databases (MySQL/PostgreSQL) and NoSQL databases (MongoDB/Cassandra).
- Proficient in server hardware management, including hardware troubleshooting, cluster maintenance, and performance optimization for large-scale computing environments.
Preferred Requirements:
- Master’s degree or higher in Computer Science, Software Engineering, or related fields.
- Experience in NLP, machine learning, deep learning, data mining, or pattern recognition.
- Strong familiarity with LSTM/CNN/RNN architectures.
- Experience with BERT, LLMs (Large Language Models), or related transformer-based models.
- Proficient in at least one machine learning framework (TensorFlow/PyTorch/Keras) with production-level implementation experience.
- Experience with Spark/Flink for building PB-scale data processing pipelines.
- Working knowledge of data lake technologies (Delta Lake/Hudi) and cloud data warehouses (Snowflake/Redshift).