Jiahong Chen, University of Sheffield
RISCS Associate Fellow
Dr Jiahong Chen is a Senior Lecturer in Law at the University of Sheffield. He previously worked at the University of Nottingham as a Research Fellow in IT Law, and before that completed his PhD at the University of Edinburgh. His research focuses on the intersection of law and technology, in particular data protection law, cyber security law, law and AI, data ethics, and internet regulation, and he has extensive experience in using co-creation methods to address the sociotechnical challenges of cyber security in smart homes. His work has been published in leading peer-reviewed journals, such as International Data Privacy Law, and his oral and written evidence has been cited in parliamentary inquiry reports.
He is currently PI of the ESRC-funded project ‘The Internet of Tactical Engagement (IoTE)’ (ES/Y00020X/1), which addresses public attitudes towards data-driven public communications in smart homes and the associated regulatory implications. He is also a Co-I on the £4.38m Responsible AI UK (RAI UK) Keystone project ‘Addressing Socio-technical Limitations of Large Language Models for Medical and Social Computing (AdSoLve)’ (EP/Y009800/1), where he leads the legal use case tackling the challenges around ChatGPT-like generative AI systems. His previous funded projects have addressed privacy and broader societal issues in the contexts of smart homes, financial services, and manufacturing.

Nandita Pattnaik, Kingston University London
RISCS Associate Fellow
Dr Nandita Pattnaik is a Lecturer in Computer Science at Kingston University London, researching cyber security, AI, and human-centred computing. She explores how people and organisations interact with complex digital systems, and how cyber risks and harms emerge from the interplay of technology, human behaviour, organisational practices, and societal structures. Her interdisciplinary work combines natural language processing, machine learning, LLMs, and empirical research to address security and privacy risks in multi-device, multi-user, online, and organisational contexts, aiming to improve risk detection, modelling, and mitigation.
She has recently contributed to a project funded by the National Cyber Security Centre (NCSC) and delivered through the Alan Turing Institute, which investigated the application of large language models (LLMs) and knowledge graphs to analyse organisational data flows and cyber risk interdependencies. The work focused on synthesising large volumes of unstructured organisational data, modelling relationships between entities, and visualising how the routine operational activities of people and systems within an organisation, which drive flows of data to external entities, can introduce cyber risks. The resulting visualisations supported communication between technical and non-technical stakeholders, reinforcing the role of the human-in-the-loop in cyber risk management.
She also played a key role in nationally significant research on ransomware harm and victim experience, conducted in collaboration with the University of Kent and the Royal United Services Institute (RUSI). The project developed harm models and analysed ransomware’s impact on critical national infrastructure, demonstrating how these cyber incidents propagate harm across organisations, infrastructure, and society.
Nandita holds a PhD in Computer Science from the University of Kent, where her doctoral research investigated the security and privacy perspectives of non-expert users in modern multi-device, multi-user homes. Her PhD employed large-scale data analysis, NLP, and qualitative methods, and resulted in publications in leading venues including ACM Computing Surveys, Computers & Security, and CSCW (Computer Supported Cooperative Work).

Lena Podoletz, Lancaster University
RISCS Associate Fellow
Dr Lena Podoletz is a Lecturer in Security and Protection Science at Lancaster University School of Social Sciences. Her research draws on criminology and Science and Technology Studies (STS), with a particular focus on how crime, security, and technology intersect. Her most recent research examines the resilience of critical national infrastructure, cybersecurity culture, and online financial fraud. She is interested in how AI- and ML-based technologies can be used both to facilitate offending and to enhance security, and in using qualitative methods to understand the sociotechnical dimensions of cyber security. Other topics she has researched include automated affect recognition, smart home technologies and policing, extended reality in police training, automated social security, and algorithmic bias and transparency. Lena is a Co-I for the Lancaster University Policing Academic Centre of Excellence (L-PACE). Her work has been published in leading international venues, including ACM FAccT, ACM VRST, and AI & Society.

William Seymour, King’s College London
RISCS Associate Fellow
William Seymour is a Lecturer in Cybersecurity in the Department of Informatics at King’s College London. He conducts interdisciplinary work at the intersection of security, privacy, HCI, ethics, and law, using a combination of computational and social science research methods. His work explores people’s concerns about using AI systems, the values those systems should embody, and how they can better meet the needs of the people who use them. He has worked with a wide range of public sector and industry partners, including Microsoft, BRE Group, and the Information Commissioner’s Office. William’s current work focuses on consent for novel interfaces such as voice, as well as the moderation of conversational software on platforms and marketplaces. Before coming to King’s as a postdoctoral researcher, he obtained a DPhil in Cyber Security from the University of Oxford.

Sarah Turner, University College London
RISCS Associate Fellow
Dr Sarah Turner is a Senior Research Fellow at the Knowledge Lab, part of UCL’s Institute of Education. She is a current recipient of a British Academy postdoctoral research fellowship, which funds three years of research: her project explores children’s entanglements with generative AI and other digital technologies. This research is child-centred: child participants are pivotal not only in collecting and analysing data, but also in shaping the outcomes to reflect their needs and lived experience, whatever form that takes.
The framing of this project stems from Sarah’s previous research. Her overarching research interest is how emerging technologies pose threats and risks to groups they were not primarily designed to serve, which has naturally led her to explore children’s roles in the management of their data, and the limitations on and agency around their technology use. She has also worked on research projects exploring how people recognise when their smart home devices are under cyber attack, and the societal, organisational, and individual impacts of ransomware in the UK.
She holds a PhD in Computer Science from the University of Kent and a Master’s in Digital Technology and Public Policy from UCL. She is an honorary member of the Institute of Cyber Security for Society at the University of Kent. More broadly, she also has an MBA and an LLB, and spent a little over a decade working in international financial services firms, managing regulatory risks arising from trading and investment banking activities. Her undergraduate degree, from the University of Oxford, was in Literae Humaniores.
