Flo Hastings
Digital Marketing Specialist | Kerv Group
Published 06/12/24
AI is making waves across various sectors, driving remarkable progress. But with these leaps forward come serious concerns about security, privacy, and ethics.
So, let’s dive into the essential aspects of responsible AI and cybersecurity by answering some burning questions around ethical and responsible AI. The insights below are drawn from a recent episode of the Learning Kerv podcast featuring Rufus Grig, Chief Technology Officer at Kerv; Francis Thomas, Chief Sustainability Officer at Kerv; and Tony Leary, Chief Information Security Officer at Kerv.
What is the difference between information security and data privacy, and why is the distinction important?
Security and data privacy are often used interchangeably, but they hold distinct meanings and implications.
Tony Leary explains this difference:
“Security is about keeping data safe, while privacy relates to our ability to exercise choices on how data is used and processed.”
In the context of AI, ensuring both security and privacy is paramount, as these systems handle vast amounts of sensitive information.
What are the main concerns when it comes to AI in Data Privacy?
The rise of generative AI tools like ChatGPT has introduced new privacy concerns. Tony highlights the risks associated with using public AI tools, commenting, “There’s always a risk that you share data about yourself or your company with a service, and that service’s privacy policy allows them to use that data in a model.” This could lead to unintended exposure of confidential information, emphasising the need for caution and awareness when using gen AI tools.
Sharing any data online comes with risks. Every website uses cookies, and some deploy them in the hundreds. Tony notes, “If you’re not paying for the product, you’re the product,” which is broadly true for many internet services. This means that personal data can be used to build profiles for targeted advertising, unless users take steps to disable cookies and protect their privacy.
What are the main concerns when it comes to information security?
AI systems, while powerful, are not immune to security threats. The ability of AI to aggregate and analyse large datasets can be exploited by malicious actors.
Tony warns,
“AI is a tool that can be used for both good and malicious reasons. Bad actors can use AI to craft highly personalised phishing emails or build their own malicious models.”
This dual-use nature of AI means it must be used with robust security measures and vigilant monitoring.
Is there any relevant regulation?
Regulatory frameworks are evolving to address the challenges posed by AI. The EU AI Act, which came into force in August 2024, aims to ensure safe and trustworthy AI practices. Similarly, the NIST framework in the US provides a comprehensive risk management approach for AI, highlighting the global efforts to regulate AI responsibly.
Tony sheds light on this by adding, “Given the extraterritorial nature of EU regulations, organisations in the UK and beyond will need to comply with the EU AI Act.”
In terms of Environmental Impact, what can we do about the significant increase in power consumption that Gen AI is causing?
The environmental impact of AI is a growing concern, particularly due to the energy consumption of data centres. Francis Thomas points out,
“Data centres consume between 1% and 1.3% of global energy demand, and this is on the rise.”
To manage the impact of using gen AI professionally, organisations must adopt sustainable practices, such as optimising model efficiency and selecting environmentally responsible data centre locations.
How can we be sure AI is Ethical?
Ethical considerations are crucial in the development and deployment of AI systems. A particularly concerning ethical issue with AI systems is biases.
Francis emphasises the importance of addressing biases in AI models as he states, “We have to be really careful about the inputs we use to not perpetuate societal biases.”
Gen AI can also be used in many ways that raise ethical concerns: for example, in disinformation campaigns or to reinforce belief in misleading information. This highlights the need for ethical guidelines and accountability measures to be put in place for all gen AI tools.
How do you ensure adherence to ethical guidelines?
Ensuring adherence to ethical guidelines involves transparency, accountability, and fairness. Organisations should test and audit for bias and actively work to mitigate it; you can’t simply put out guidelines and think the job is done. Existing frameworks, such as those from Microsoft and NIST, can help guide organisations towards responsible AI practices.
Tony advises, “Think about how you use AI now and read the ICO resources for help.” Fran adds, “Pick the right infrastructure partner that is leading by example, both environmentally and socially.”
As AI continues to transform industries, it is imperative to approach its development and use with a focus on security, privacy, sustainability, and ethics. By understanding and addressing these concerns, we can harness the power of AI responsibly and ensure its benefits are realised without compromising our values.
For a deeper dive into these topics, listen to the latest episode of the Learning Kerv podcast, ‘Responsible AI & Cybersecurity: What You Need to Know’, and hear more from experts Tony Leary and Francis Thomas and learn how to use AI responsibly.