How we’re using human behavioural science to improve defence and security
In defence and security, the human element is often considered the weakest link. Yet it can also be our strongest asset.
Understanding how and why people act the way they do is hugely important when considering systems and processes. Thinking about how people process information means we can better anticipate how they will interact with those systems.
How susceptible are they to simple mistakes or misunderstandings that can impact security controls? How can technology be designed to minimise and mitigate human vulnerabilities? Addressing these questions creates more secure, user-friendly systems that account for human tendencies and limitations.
Here at Frazer-Nash Consultancy, our Applied Human Sciences team is helping organisations, large and small, navigate the complexities of human behaviour to achieve the best outcomes. This is particularly important when it comes to decision-making and communication. From culture and leadership to command and control, and from AI to cyber security, they’re helping organisations better understand multi-faceted human behaviour across a range of scenarios and environments.
Our team applies theory, insights and methods from the behavioural sciences to systematically analyse existing problems and develop effective solutions. This expert group of human scientists conducts innovative research to find commonalities, provide reasoning, explain behaviours and predict future actions. By looking at behaviour at individual, team and organisational levels, they enhance performance and drive better outcomes.
Here’s a flavour of what our Applied Human Sciences team has been up to:
- Organisation Resilience Against Cyber Attacks
The team developed the PREPARE model of organisational resilience to cyber-attacks, defining what ‘good’ looks like from a people and process perspective. They carried out detailed research to provide tangible recommendations and guidance for cyber resilience. Used by Dstl and the MOD, the model allows organisations to assess current performance and identify bespoke areas for improvement.
- The Human Impact in Software Development
Cognitive burden is the demand a task places on someone’s working memory. As part of the development of a new software tool, the team developed use cases, user journeys and stories to understand how people would apply the tool. This insight informed the user experience and interface design, giving confidence that people would use the tool as intended and collect robust data.
- Defining User Requirements for Robotics and Autonomous Systems
The team identified specific task requirements from both physical and cognitive perspectives, along with the human risks, assumptions, issues, dependencies and opportunities (RAIDO). This informed a set of system integration requirements for the future development of crewed stations.
- Behavioural Analytics and AI in UK National Security
The team contributed to a research project exploring the use of behavioural analytics and artificial intelligence within UK national security. It was led by the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) and involved a multidisciplinary team of researchers, technologists, former policymakers and practitioners from fields such as data science, artificial intelligence, cybersecurity, criminology, law and ethics.
The resulting policy report sets the scene for further exploration and broader innovation within national security, encouraging the responsible and effective deployment of behavioural analytics in the UK.
- Employee Behaviour Towards Cyber Security and the Influence of Culture
The team conducted focus groups within the military to understand the barriers and enablers to cyber security. They gathered information to assess and understand the underlying drivers of employees’ behaviours in specific focus areas. They also studied the importance of culture and its impact on change and adoption of technologies.
- Understanding Human Behaviour to Train AI Agents
By analysing the cognitive processes humans use to complete tasks, the team can translate them into machine-readable form for AI development. They captured the thought processes of ethical hackers during penetration testing to understand their actions at each stage of an infiltration. This decision-making and reasoning was then mapped into a machine-learning algorithm.
To find out how our Applied Human Sciences team can help you, get in touch with us today.