Invited Keynotes
Title: Nomadic honeypots: How to create actionable cyber threat intelligence?
Title: Privacy and Cybersecurity Issues in the Era of Fake News and Generative Artificial Intelligence
Cybersecurity and data privacy have become increasingly significant in the era of fake news and generative AI. The proliferation of fake news, deepfakes, and misleading content across social media platforms is a growing problem.
Generative AI, built on large language models, has also shown the potential to transform cybersecurity, while giving rise to new malicious threats such as DeepLocker, FraudGPT, and WormGPT.
In this talk, I will delve into the evolving tactics of cyber attackers who leverage AI-generated content to orchestrate more convincing privacy invasions and cyber attacks, and to spread misinformation. In parallel, I will analyze emerging defensive strategies that employ Generative AI for threat detection and for simulating cyber attacks for training purposes. The final discussion aims to provide insights into creating resilient cybersecurity and privacy frameworks capable of meeting the challenges posed in the era of fake news and Generative AI.
Title: Formal verification of security protocols: the Squirrel prover
Security protocols are widely used today to secure transactions that take place over public channels such as the Internet. Common applications include the secure transfer of sensitive information like credit card numbers, or user authentication on a system. Because of their increasing ubiquity in many important applications (e.g. electronic commerce, smartphones, government-issued IDs, ...), a very important research challenge is to develop methods and verification tools that increase our trust in security protocols, and thus in the applications that rely on them.
Formal methods have produced various approaches for proving that security protocols indeed guarantee the expected security properties. Tools like ProVerif and Tamarin analyse protocols in the symbolic model, leveraging techniques from model-checking, automated reasoning, and concurrency theory. However, security in the symbolic model does not necessarily imply security in the cryptographer's standard model, the computational model, where attackers are probabilistic polynomial-time Turing machines. Verification techniques for the computational model, though crucial, often offer less flexibility or automation than tools for the symbolic model.
In recent collaborative work, my colleagues and I have proposed a novel approach, building upon the CCSA logic introduced by Gergei Bana and Hubert Comon a few years ago. This approach has been implemented in a new proof assistant called Squirrel, and its effectiveness has been validated across various case studies. In this presentation, I will outline the key aspects of the approach, highlight its advantages, and discuss some of the remaining challenges.