
McGill team awarded CIFAR AI Safety Catalyst Grant to advance developer oversight in AI-assisted coding

Published: 19 January 2026
The McGill team aims to develop guidelines, tools, and policy insights that help software engineers work safely and effectively with AI-assisted coding systems.

A McGill research team is tackling one of AI's fastest-moving challenges: how software developers can steer and safeguard code as AI systems become capable of writing large portions of software on their own.

The team is one of ten across Canada awarded funding through the Catalyst Grants program, part of the new Canadian AI Safety Institute (CAISI) Research Program at CIFAR. Each project receives $100,000 for one year, with support for up to two postdoctoral researchers.

The McGill project, "Maintaining Meaningful Control: Navigating Agency and Oversight in AI-Assisted Coding," is led by Jackie Cheung, Associate Professor in the School of Computer Science, Canada CIFAR AI Chair, and Associate Scientific Co-Director at Mila; Jin Guo, Associate Professor in the School of Computer Science, co-director of the McGill Software Technology Lab, and Associate Member of Mila; and postdoctoral researcher Shalaleh Rismani.

"Developers struggle most with trust and verification"

AI-assisted coding systems are rapidly transforming software engineering. According to the most recent benchmarks, today's top models can solve more than 60% of well-scoped, real-world software-engineering tasks, such as bug fixes. AI companies are also actively developing more agentic systems that can execute multi-step software-development workflows with little to no human intervention.

As organizations adopt these tools, developers face increasing pressure to integrate them into their workflows. While the technology promises to improve efficiency and quality, it also introduces new risks.

"Developers struggle most with trust and verification of AI-generated code," said Professor Guo. "The generated code may look correct, but developers aren't confident about its reliability, correctness, or hidden security issues." Low-quality code can also create time-consuming review processes, and many developers are unsure whether these tools ultimately boost productivity.

Despite the growing adoption of AI systems in software engineering, and in code generation specifically, clear guidelines on what developers should oversee, how they should do it, and when they should intervene are still lacking.

"The human-computer interaction [HCI] community has been investigating how these emerging technologies influence software engineering practices, examining issues such as trust, tool adoption, and workflow adaptation," said Professor Guo. "However, research on what effective oversight looks like in AI-supported code generation remains underdeveloped."


Research shaping safer AI-assisted coding

To close these gaps, the McGill team aims to develop guidelines, tools, and policy recommendations that help software engineers work safely and effectively with AI-assisted coding systems.

The project will roll out over multiple phases, starting with identifying key patterns in how developers override, refine, or validate AI-generated code. Through interviews with developers in small and medium-sized companies, the researchers will map key decision points: when suggestions are accepted or rejected, how generated code is reviewed, and what prompts hands-on intervention as systems become more autonomous.
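The kinds of decision points such interviews aim to surface can be illustrated with a minimal sketch. All names and categories below are hypothetical, invented for illustration only; they are not the team's actual study instrument:

```python
from dataclasses import dataclass
from enum import Enum


class DeveloperAction(Enum):
    """Possible responses to an AI code suggestion (illustrative categories)."""
    ACCEPT = "accept"      # suggestion merged as-is
    REJECT = "reject"      # suggestion discarded
    REFINE = "refine"      # suggestion edited before merging
    VALIDATE = "validate"  # extra review or tests run before deciding


@dataclass
class DecisionPoint:
    """One observed oversight event in an AI-assisted coding session."""
    task: str
    action: DeveloperAction
    rationale: str


# A toy session log of the kind such a study might collect.
session = [
    DecisionPoint("fix null-pointer bug", DeveloperAction.ACCEPT, "tests pass"),
    DecisionPoint("refactor auth module", DeveloperAction.REFINE, "style mismatch"),
    DecisionPoint("add payment endpoint", DeveloperAction.REJECT, "security concern"),
]


def intervention_rate(log):
    """Fraction of suggestions that needed hands-on intervention (not accepted as-is)."""
    intervened = sum(1 for d in log if d.action is not DeveloperAction.ACCEPT)
    return intervened / len(log)


print(f"{intervention_rate(session):.2f}")  # prints 0.67 for this toy log
```

Aggregating such events across many sessions is one plausible way to quantify when and why developers step in.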

Building on these findings, the team will co-design an AI-assisted coding interface to support effective oversight when developers use AI to carry out substantial software engineering tasks. The interface will allow developers to set constraints and receive clear explanations of the AI's reasoning, uncertainties, and alternatives. The system will adapt dynamically to developer input, creating a shared sense of intent between human and machine.
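One way developer-set constraints and uncertainty signals might look in code is sketched below. This is a speculative illustration under assumed names (`Suggestion`, `OversightPolicy`, a confidence threshold, a token blocklist); the team's actual interface is still to be co-designed and may work quite differently:

```python
from dataclasses import dataclass, field


@dataclass
class Suggestion:
    """An AI code suggestion annotated for human oversight (illustrative)."""
    code: str
    explanation: str   # why the model proposed this change
    confidence: float  # model's self-reported certainty, 0..1
    alternatives: list = field(default_factory=list)


@dataclass
class OversightPolicy:
    """Developer-set constraints on when suggestions may be applied without review."""
    min_confidence: float = 0.8
    forbidden_tokens: tuple = ("eval(", "os.system(")

    def review_needed(self, s: Suggestion) -> bool:
        """Flag suggestions that are low-confidence or touch forbidden calls."""
        return (s.confidence < self.min_confidence
                or any(tok in s.code for tok in self.forbidden_tokens))


policy = OversightPolicy(min_confidence=0.9)
risky = Suggestion(
    code="os.system(cmd)",
    explanation="Runs the user-supplied shell command directly.",
    confidence=0.95,
)
print(policy.review_needed(risky))  # flagged for review despite high confidence
```

The design point the sketch makes is that confidence alone is not enough: a suggestion can be high-confidence yet still violate a constraint the developer cares about, so both signals feed the review decision.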

The team will test the interface with developers to evaluate its impact on workflow, confidence, and code quality. They will also experiment with features such as explainability mechanisms, critique prompts, and uncertainty indicators. Depending on the results, follow-up research could turn the findings into a fully designed interface.


Creating actionable guidelines

The project's ultimate goal is to produce practical guidelines for ensuring that AI-generated software remains reliable and under meaningful human control. These recommendations will inform best practices for AI developers, software engineers, and policymakers.

"Effective oversight is highlighted in many regulatory approaches, including the EU AI Act, but what it looks like in practice is still unclear," said Shalaleh Rismani, PhD. "We think this project can help clarify what effective oversight looks like in real software engineering settings and inform both industry practices and policy discussions in Canada and internationally."

The project's long-term impact may extend into software engineering education, so that students can learn the best practices and ethical considerations of using AI-based coding tools.

A multidisciplinary team tackling a national challenge

McGill's position as a national leader in AI research makes it a natural home for this work. "McGill's strong research communities in AI, software engineering, HCI, and AI ethics, and partnerships with institutes such as Mila and the Computational and Data Systems Institute (CDSI), provide an ideal environment for this research to take place," said Professor Cheung. "These interdisciplinary connections allow us to approach this project from different perspectives and support practical impact through collaborations with industry partners."

The team emphasizes that CIFAR's funding has also enabled a high level of collaboration. "The Catalyst Grant allowed us to bring together three researchers with very different but complementary backgrounds," said Professor Cheung. "For a project like this, you really need methods and perspectives from multiple disciplines, and the grant made it possible for us to actually build that kind of team."
