Identifying Intentional Algorithmic Bias and its Effect on Model Behaviour
Unconscious algorithmic bias has received considerable attention in the research community and often causes unintended, disproportionate harm to certain parts of society. However, an important dimension of bias that has not received such attention is intentional bias: bias deliberately introduced into a system at the learning stage to skew its behaviour and outcome decisions. This is an emerging vulnerability that can significantly degrade performance and have serious consequences, such as discriminatory lending and hiring decisions and the proliferation of fake news. This interdisciplinary research project studied intentional bias from technical and social perspectives, including how to identify such bias and its effects on semi-supervised and unsupervised model behaviour in economic and financial scenarios such as AI-based hedge funds.
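As a purely illustrative sketch (not the project's code), the following shows one way bias can be intentionally introduced at the learning stage: flipping positive training labels for one protected group before fitting a lending-style classifier. All data and variable names are synthetic assumptions.

```python
# Minimal sketch of intentional bias injection via label flipping.
# Synthetic illustration only; not the project's code or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic loan applications: one feature is a protected group flag (0/1).
n = 5000
group = rng.integers(0, 2, n)                          # protected attribute
score = rng.normal(0.0, 1.0, n)                        # creditworthiness signal
X = np.column_stack([score, group])
y = (score + rng.normal(0, 0.5, n) > 0).astype(int)    # unbiased ground truth

# Intentional bias: at the learning stage, flip a fraction of positive
# labels to negative for one group before training.
y_poisoned = y.copy()
flip = (group == 1) & (y == 1) & (rng.random(n) < 0.4)
y_poisoned[flip] = 0

clean = LogisticRegression().fit(X, y)
poisoned = LogisticRegression().fit(X, y_poisoned)

for name, model in [("clean", clean), ("poisoned", poisoned)]:
    rates = [model.predict(X[group == g]).mean() for g in (0, 1)]
    print(f"{name}: approval rate group0={rates[0]:.2f}, group1={rates[1]:.2f}")
```

Running the sketch shows the poisoned model approving the targeted group at a markedly lower rate even though the underlying ground truth is group-neutral, which is the kind of skewed outcome decision the project set out to detect.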
Scoping requirements for algorithmic fairness metrics
The use of algorithms to guide decision making is increasingly prevalent, as are concerns about the biases within these algorithms. A growing literature attempts both to mitigate biases and to measure fairness through metrics. However, this literature is largely technical, with little attention paid to how best to communicate these metrics. This raises two important questions: 1) are technical metrics widely understood, and 2) are technical metrics the optimal approach to fairness? This project carried out research to understand the current landscape: which fairness metrics are currently being used, how well they work, how well they are understood, and what can be improved in terms of measurement and communication. By improving the communication of fairness metrics, the research reduced the misuse of algorithms and thereby promoted transparency and trust. Adopting a mixed-method approach, the scoping phase reviewed the relevant literature and elicited user groups' opinions to inform a testing phase in which metrics were evaluated. The findings were summarised and disseminated at local events to raise awareness and grow interest.
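To make concrete the kind of technical metric whose communication the project examined, here is a minimal sketch of two widely used group fairness metrics, demographic parity difference and equal opportunity difference. The data are synthetic and the functions are illustrative assumptions, not the project's evaluation code.

```python
# Illustrative computation of two common group fairness metrics.
# Synthetic example; not the project's evaluation code.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute gap in true-positive rates between the two groups."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)    # protected attribute (0/1)
y_true = rng.integers(0, 2, 1000)   # actual outcomes
y_pred = rng.integers(0, 2, 1000)   # model decisions

print("Demographic parity difference:", demographic_parity_diff(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_diff(y_true, y_pred, group))
```

Both metrics reduce fairness to a single number between 0 and 1, which illustrates the project's central concern: such figures are easy to compute but not necessarily easy for non-technical stakeholders to interpret or act on.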
CRESCORDIA (CREating Digitally Sustainable COmmunities in the face of Risks to Digital InfrAstructure)
An increased dependency upon digital services brings with it a unique series of risks, particularly for modern communities whose essential services, such as healthcare, education, energy, travel and banking, increasingly rely upon stable digital infrastructures. The digital sustainability of such communities is therefore an increasingly important question for the security of the economic, social and environmental milieu, particularly when communities face trade-offs between multiple objectives (e.g. between digital security and environmental sustainability). This project asked how modern communities can become digitally sustainable in the face of an increased dependency on digital services. Given that the basic premise of sustainability is to meet the needs of the present without compromising the ability of future generations to meet their own needs, it was essential to evaluate how communities can build resilience to the pressing demands of both present and future generations into their everyday practices.
Balancing Machine Learning Performance with Energy Consumption for a Secure Sustainable Community
Artificial intelligence (including machine learning) has a significant role in enhancing digital sustainability by improving efficiency and reducing waste. However, it is crucial to ensure that AI applications are themselves developed in a sustainable manner. The rapid growth of the AI market results in substantial emissions, mainly due to complex AI models that prioritise accuracy over energy efficiency. Notably, training GPT-3, a large language model, consumed more electricity than 100 US homes use in a year. It is worth questioning whether such high accuracy is necessary for every application. Research was carried out to understand the energy consumption of creating, training, and operating ML models. This showed that by tackling the main drivers of energy consumption, namely a model's computational complexity, while maintaining acceptable performance and security, we can develop efficient and sustainable digital solutions. This was achieved by testing both scientific and industrial cases and developing an end-to-end pipeline (from an industry case) for an applicable approach to sustainability.
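The accuracy-versus-energy trade-off at the heart of the project can be sketched in a few lines. Training time is used below as a rough proxy for energy; a real study would use a hardware power meter or a software tracker such as CodeCarbon. The dataset and model choices are illustrative assumptions, not the project's pipeline.

```python
# Sketch of the accuracy-versus-compute trade-off behind the project's question.
# Training time stands in as a crude proxy for energy consumption.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Grow model complexity and watch accuracy gains flatten while cost keeps rising.
for n_trees in (10, 100, 1000):
    start = time.perf_counter()
    model = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X_tr, y_tr)
    cost = time.perf_counter() - start
    acc = model.score(X_te, y_te)
    print(f"{n_trees:>4} trees: accuracy={acc:.3f}, train time={cost:.2f}s")
```

Typically the jump from 100 to 1000 trees buys little or no accuracy while multiplying compute roughly tenfold, which is exactly the kind of diminishing return that motivates trading a small amount of performance for large energy savings.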
AISEC (Assessing Vulnerabilities, Transferable Attack Tools and Compromise Indicators of AI Technologies)
This project investigated the salient vulnerabilities of AI technologies throughout their life cycle; how attacks, tools, methods, and vectors proliferate in the international system and between societal sectors in a rapidly changing and highly volatile world order, and how transferable they are between AI technologies and sectors; and how government, academia and industry can better identify when AI systems have been compromised, through a set of common technical and behavioural indicators. The project drew on competencies in political science, strategic studies, computer science and behavioural science to build a network of researchers in the North-West working on the Security of AI. The project also helped establish and disseminate best practices on the security of AI.
AI Code Repair
Standard development libraries focus on delivering efficient AI code, code that can often contain security vulnerabilities requiring time-consuming fixes from software developers. This project combined Large Language Models (LLMs) with Formal Verification (FV) to ensure standard libraries produce secure and efficient AI code.
The final solution automatically finds and repairs errors in AI code written in C/C++ and is expected to become part of the software development process to produce efficient, robust, and secure AI code.
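A verify-then-patch loop of this kind can be sketched as follows. The `verify` and `propose_patch` functions below are hypothetical placeholders standing in for a real formal verifier (e.g., a bounded model checker) and an LLM API respectively; nothing here reflects the project's actual tooling.

```python
# Conceptual sketch of an LLM + formal-verification repair loop for C/C++ code.
# `verify` and `propose_patch` are hypothetical stubs, not real tool calls.
def verify(c_source: str) -> list[str]:
    """Placeholder: a real verifier would return violated properties/counterexamples."""
    return ["buffer overflow: unbounded strcpy"] if "strcpy" in c_source else []

def propose_patch(c_source: str, diagnostics: list[str]) -> str:
    """Placeholder: a real LLM would rewrite the code guided by the diagnostics."""
    return c_source.replace("strcpy(dst, src)", "strncpy(dst, src, sizeof(dst) - 1)")

def repair(c_source: str, max_rounds: int = 5) -> str:
    """Alternate verification and LLM patching until no violations remain."""
    for _ in range(max_rounds):
        diagnostics = verify(c_source)
        if not diagnostics:
            return c_source  # no property violated; candidate is accepted
        c_source = propose_patch(c_source, diagnostics)
    raise RuntimeError("could not repair within the round budget")

print(repair("void f(char *src) { char dst[8]; strcpy(dst, src); }"))
```

The key design point the sketch captures is that the verifier, not the LLM, decides when the code is safe: the LLM only proposes candidate fixes, and each candidate is re-checked before being accepted.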
The Future of Interlinked Computing Phase 2
This project built on the previous work carried out on The Future of Interlinked Computing. Futurists forecast the implications of ubiquitous connectivity, artificial intelligence, mixed reality, low- and no-code solutions, and digital ownership where these challenge UK wellbeing and values. The timeframe considered was the next 15 years or so, specifically to 2040, and the project identified specific ways in which the forecast changes may affect distinct aspects of the 13 CNI sectors and related national interests.
Error Reporting in High-Reliability Organisations
High-reliability organisations operate with very low failure rates in high-hazard environments, such as aviation, nuclear power, and healthcare. Error reporting (e.g., of incidents and near misses) in these organisations is critical to organisational learning and continuous improvement. This project examined the methods that high-reliability organisations use to ensure error reporting and evaluated their effectiveness. The project conducted a rapid evidence review to identify the methods used, assessed how effectively they are being used, and made recommendations for the methods most likely to ensure compliance.