Christian Agrell is a Principal Scientist at DNV’s Group Research and Development, where he leads the Risk and Modelling Technologies group with a focus on pioneering digital assurance methods. With a PhD in machine learning for high-risk applications, his expertise lies in building robust, responsible, and ethical AI systems, particularly in settings where risk management is crucial.
His team at DNV is deeply involved in strategic research, developing tools and methodologies for the future of digital assurance. A significant part of their work revolves around the assurance of AI, aiming to ensure that AI applications are safe, robust, accurate, and fair, while also maintaining high standards of explainability, transparency, privacy, and security. This work is vital in an era where AI’s impact on society and various industries continues to grow exponentially.
Another key area of Christian’s research is probabilistic machine learning and uncertainty quantification. His team is making strides in developing AI systems capable of acknowledging their own limits, indicating when they “don’t know”. This is particularly important in high-risk scenarios, where accurate prediction and a clear understanding of uncertainty can be a matter of safety and operational reliability.
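To make the idea concrete, here is a minimal sketch of one common approach to this kind of uncertainty quantification: Gaussian process regression, whose predictive standard deviation grows far from the training data. This is an illustrative toy example using only NumPy, not a description of DNV’s actual tooling; the kernel choice and hyperparameters are assumptions for the sketch.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Gaussian process posterior mean and std at the test points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)
    return mean, np.sqrt(np.maximum(var, 0.0))

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)

# Near the training data the model is confident...
mean_near, std_near = gp_predict(x, y, np.array([1.5]))
# ...far from it, the predictive std approaches the prior: "I don't know".
mean_far, std_far = gp_predict(x, y, np.array([10.0]))
```

The key property is that the model reports low uncertainty where it has seen data and reverts to high prior uncertainty where it has not, which is exactly the “knowing when you don’t know” behaviour described above.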
Furthermore, Christian’s work combines physics-based and data-driven modelling, integrating phenomenological knowledge with data-driven models to produce “physically obedient models”. These models adhere not only to the laws of physics but also align with observed data, providing a more comprehensive and reliable understanding of complex systems.
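One simple way to sketch this hybrid idea is a physics term plus a learned correction: fit a known physical law to the data first, then fit a data-driven model to the residual. The spring example, the cubic nonlinearity, and the polynomial correction below are all assumptions chosen for illustration, not DNV’s actual methodology.

```python
import numpy as np

# Synthetic spring measurements: mostly Hooke's law F = k*x, plus a
# small cubic nonlinearity that the pure physics model cannot capture.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 40)
F = 3.0 * x + 0.2 * x**3 + rng.normal(0.0, 0.05, x.size)

# Step 1: fit the physics-based part (Hooke's law) by least squares.
k_hat = np.sum(x * F) / np.sum(x * x)

# Step 2: fit a data-driven correction on the physics residual.
residual = F - k_hat * x
coeffs = np.polyfit(x, residual, deg=3)

def hybrid(x_new):
    """Physics term plus learned correction."""
    return k_hat * x_new + np.polyval(coeffs, x_new)

rmse_physics = np.sqrt(np.mean((F - k_hat * x) ** 2))
rmse_hybrid = np.sqrt(np.mean((F - hybrid(x)) ** 2))
```

The hybrid model stays anchored to the physical law (the `k_hat * x` term) while the learned correction absorbs what the physics misses, which is the essence of a model that obeys physics yet aligns with observed data.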
Under Christian Agrell’s leadership, his group at DNV is setting new standards in the field of digital assurance, ensuring that the rapid advancement of AI technologies aligns with the principles of safety, reliability, and ethical responsibility.