Artificial Intelligence (AI) systems present advantages and opportunities for businesses thanks to advances in computing power.
For many organisations, AI isn’t just an option; it is imperative for staying ahead of the game.
However, while AI models have profoundly impacted how businesses operate, experts have warned of the potential risks.
A team from Marsh came together at the recent Future Unlocked event to address the challenges and discuss how businesses can build cyber resilience.
AI's cyber challenges and opportunities uncovered
Types of AI and limitations
Skip to 0:04:35 in the recording
Artificial Intelligence is the ability of computers to simulate human intelligence. Eric Alter, Corporate Risk and Cyber Engagement Leader, Marsh, explained what different AI technologies mean, along with their benefits and limitations:
- Narrow AI performs specific tasks in a limited domain.
- General AI can carry out the range of tasks humans can perform.
- Generative AI employs primarily deep learning models to generate new content across various domains.
- The four primary types of AI include:
- Reactive: such as Deep Blue and Netflix recommendations.
- Limited memory: the most widely used; learns to make predictions and perform complex tasks.
- Theory of mind: machines acquire decision-making qualities similar to humans.
- Self-aware: machines with decision-making capabilities and human-level consciousness; this may never appear.
- Generative AI (GenAI) focuses on creating original content, while predictive AI focuses on forecasting future outcomes based on data.
- AI is used across many industries, including financial services for fraud detection, healthcare for answering questions and supporting diagnoses, such as with mammograms, and entertainment.
- Limitations include unreliable calculations, so outputs can mislead. Users must ensure AI is used safely and reliably; it’s only a threat if misused.
Managing AI risks
Skip to 0:23:30 in the recording
Businesses must pay closer attention to how they use AI; most already have the tools to manage the risks.
James Crask, who leads the Strategic Risk Consulting team for Marsh in the UK, discussed five AI risks:
- Algorithmic biases and discrimination.
- Transparency of decision-making.
- Data privacy and security.
- Legal and regulatory compliance.
- Ethical and social implications.
- Don’t rely solely on your technologists to manage AI risks.
- Treat AI dangers like any other risk and consider the financial, legal or reputational harm to your organisation.
- Make sure staff are trained to minimise human error and safeguard your organisation.
- Government agencies and regulators have increased their focus, issuing guidelines or imposing new AI regulations. But businesses can’t rely on regulation alone.
AI and ransomware
Skip to 0:38:19 in the recording
There has been an uptick in sophisticated attacks, such as using voice AI to impersonate CEOs and gain access to systems, or enhanced phishing emails.
Traditional controls remain the most effective defence against a ransomware attack, which can affect any business.
Amy Mason, a managing consultant who works in Marsh’s Crisis and Resilience team, discussed the changing risks and practical options.
- Ransomware hackers increasingly use AI for financial gain, with Microsoft saying the number of attacks involving data exfiltration has doubled since November 2022.
- Predictive AI could provide a solution by identifying scams and anomalies. AI-led cyber detection and protection tools will emerge to identify suspicious activity and scan for vulnerabilities.
- Get the basics right. Ensure the right specialists are on hand, have excellent backups, and offer effective phishing training for colleagues.
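The anomaly detection mentioned above can be illustrated in miniature: flag data points that sit far outside the historical norm. The sketch below is purely illustrative; the data, threshold, and `find_anomalies` helper are hypothetical, and commercial detection tools are far more sophisticated.

```python
# Illustrative sketch only: flag values far from the historical mean.
from statistics import mean, stdev

def find_anomalies(history, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu = mean(history)
    sigma = stdev(history)
    return [x for x in history if abs(x - mu) > threshold * sigma]

# Hypothetical daily counts of failed logins; the spike stands out.
daily_failed_logins = [12, 9, 11, 10, 13, 8, 240, 12, 10]
print(find_anomalies(daily_failed_logins))  # → [240]
```

Real tools apply the same principle to many signals at once, such as login times, data transfer volumes, and email patterns, to surface activity worth investigating.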
AI risks and insurance
Skip to 0:57:30 in the recording
As AI continues to grow, it poses questions about the future of insurance. Will there be a specific market for AI insurance, or should this be included within existing cyber policies?
Joe Latham, Marsh’s UK cyber, media and technology practice leader, discussed the positive advancements and significant challenges for the insurance industry.
- Generative AI could enhance risk assessment and underwriting practices and detect fraudulent behaviour to minimise losses.
- Negatives include ethical concerns and bias, which could result in discriminatory practices when evaluating cyber risks or determining liability.
- Automation facilitated by GenAI tools can lead to faster and more efficient claims processing, improving customer satisfaction.
- Organisations should think about upholding privacy standards. How do you share personally identifiable information (PII) and handle sensitive information?
Rules for using AI
Skip to 1:10:00 in the recording
Apply common sense to use AI safely and effectively. Remember:
- Generative AI is prone to error, just as humans are.
- Many risks are familiar, but new risks may arise.
- AI laws and regulations are evolving, impacting insurance.
Skip to 1:15:30 in the recording to watch our panel’s take on the audience’s questions.
Never miss an event
Sign up for our newsletter to hear about upcoming events and for expert insights, advice and support for you and your business.