| Date | No | Name | ID | Topic | Presentation (50%) | Report (25%) | Roll Call (25%) | Final Score | Grade |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1/22 | 1 | 孔昊然 | 225040481 | GPU Communication Systems: Collective Communication Libraries | 82 | | | | |
| | 2 | 黄嘉铭 | 224040352 | Research on Automated Code and Test Assertion Generation with LLMs | 95 | | | | |
| 1/27 | 3 | 张馨元 | 225045037 | End-to-End AI Inference Systems for Real-Time Healthcare | 86 | | | | |
| | 4 | 彭一凡 | 225040521 | Real-time System Optimization for ROS 2: Scheduling and Communication | 96 | | | | |
| | 5 | 齐希贤 | 120090691 | Beyond Algorithms: Hardware-Constrained Vector Search Databases | 93 | | | | |
| 1/29 | 6 | 裴承轩 | 225040508 | PD-Disaggregation in Large Language Models | 92 | | | | |
| | 7 | 陈张天艺 | 225040511 | Tracing the Operating System's Microkernel Journey and Its Performance Trade-offs | 95 | | | | |
| 2/3 | 8 | 贾钊 | 225040505 | Efficient Scheduling in Distributed OS | 88 | | | | |
| | 9 | 张启航 | 119010434 | When LLMs Become OS Operators: Rethinking Trust and Isolation | 92 | | | | |
| | 10 | 陈俊颖 | 223040263 | Evolution of Medical LLM Training Systems | 94 | | | | |
| 2/5 | 11 | 庞威 | 225040490 | Scheduling Deep Learning on GPU Clusters | 98 | | | | |
| | 12 | 陈启旭 | 120090643 | Elastic Resource Provisioning in Cloud Platforms via Workload Prediction and Performance Modeling | 90 | | | | |
| 3/5 | 13 | 毛宇 | 118010224 | Data-Driven Predictive Control for Cloud Resource Management | | | | | |
| | 14 | 王瑞翔 | 225040514 | | | | | | |
| 3/10 | 15 | 张文谦 | 225040483 | Language Model as OS | | | | | |
| | 16 | 李辉 | 224040351 | Profile-Guided Optimization for Various Applications (OS kernel and data warehouse) | | | | | |
| 3/12 | 17 | 周炫宁 | 225045030 | Breaking the Memory Wall: FlashAttention and the Philosophy of IO-Aware Systems | | | | | |
| | 18 | 颜小川 | 225045041 | Sharpen the Spec, Cut the Code: A Case for Generative File System with SYSSPEC | | | | | |
| 3/17 | 19 | 沈宇昊 | 225045038 | | | | | | |
| | 20 | 张书纶 | 225045020 | LLM Agent Memory System | | | | | |
| 3/19 | 21 | 葛文韬 | 119010080 | ZeRO: Memory Optimizations Toward Training Trillion Parameter Models | | | | | |
| | 22 | 倪钦科 | 225045036 | GPU Virtualization and Scheduling Strategies for Low-Latency Speech Inference | | | | | |
| 3/24 | 23 | 廖欢 | 225040515 | System Optimizations for Real-Time Streaming Speech Dialogue | | | | | |
| | 24 | 王慧中 | 224045005 | | | | | | |
| 3/26 | 25 | 房子皓 | 120090326 | | | | | | |
| | 26 | 谭峙轩 | 225040506 | | | | | | |
| 3/31 | 27 | 谢缘 | 224040374 | KV Cache Management for Efficient LLM Serving | | | | | |
| | 28 | 卢启晟 | 225040482 | | | | | | |
| 4/2 | 29 | 王曼仪 | 225045034 | LLMs for Code: Code Agents and SWE-bench (a Benchmark for Software Engineering Tasks) | | | | | |
| | 30 | 王楚娇 | 224045007 | | | | | | |
| 4/7 | 31 | 戴世成 | 225040523 | Operating System Support for Large-Scale Graph Learning | | | | | |
| | 32 | 陈骏安 | 225040494 | Distributed Architectures for Large-Scale Deep Learning Training | | | | | |
| 4/9 | 33 | 吴冠宗 | 224045015 | | | | | | |
| | 34 | Juan Albert Wibowo | 121040001 | Beyond the Kernel: Operating System Abstractions for Hybrid Agent Orchestration and Privacy-Preserving Dispatch | | | | | |
| 4/14 | 35 | 朱桐 | 225040538 | OS-Inspired LLM Systems: Paging-Style Context and Memory Management, and Syscall-Style Safe Function Calling | | | | | |
| | 36 | 张书源 | 225040535 | From Docker to Kubernetes: A History of Container Management | | | | | |
| 4/16 | 37 | 郑博文 | 225040500 | | | | | | |
| | 38 | 李钺 | 225040518 | | | | | | |
| 4/21 | 39 | 谢波涛 | 225045044 | CXL-Enabled Memory Pooling: Redefining Memory Management in Distributed Systems | | | | | |
| | 40 | 王匡 | 224040348 | | | | | | |
| 4/23 | 41 | 李煜东 | 225040501 | | | | | | |
| | 42 | 胥瑶瑶 | 224040357 | | | | | | |
| 4/28 | 43 | 刘效源 | 120040051 | Processes vs. Threads: Optimizing Large-Scale Audio Data Processing | | | | | |