Metaverse Capitalists


The hidden dangers of blindly embracing AI in your business

It’s no secret that AI is a hot topic on many organisations’ agendas and strategies. However, with the release of ChatGPT and machine learning continuously evolving, cybersecurity service provider ramsac is advising businesses not to jump blindly into AI.

From malicious code to leaked data, using an LLM (large language model) could be detrimental to your organisation if used improperly.

The dangers of AI for businesses

With the media storm around ChatGPT, the site currently has around 100 million users and is visited 1 billion times every month. As an LLM, it uses deep learning to answer queries, statements or requests in a human-like manner. So, how is this dangerous?

LLMs rely on accessible data from the open internet to inform their responses to users. With 100 million users and billions of requests already logged on ChatGPT, it is possible for the organisation running the LLM to learn from those submissions and store the data for future responses. Think about it: ChatGPT doesn’t ask for your permission first. Because LLMs cannot distinguish confidential information from publicly available information, company secrets or intellectual property could be leaked and lost.

What should businesses do when using LLMs?

– Avoid using public LLMs for business-specific tasks or information, such as reviewing redundancy options
– Use an LLM hosted by a trusted cloud provider, or self-host one, as this is a safer option
– Review queries and requests before submitting them to LLMs, as it’s possible for this information to be hacked and leaked
– Avoid including sensitive information on public LLMs, such as confidential data
– Submit business critical queries on private or self-hosted LLMs only
– Ensure up-to-date cybersecurity monitoring is enabled and active so breaches and threats can be detected
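The advice above about reviewing queries before submission can be sketched as a simple pre-submission filter. The example below is a minimal, illustrative Python sketch: the `redact` helper and its regex patterns are assumptions for illustration only, and a real filter would need far broader coverage (names, contract terms, intellectual property) before anything is sent to a public LLM.

```python
import re

# Hypothetical patterns illustrating the kinds of sensitive data to strip
# from a prompt before it ever reaches a public LLM. These are examples,
# not a complete data-loss-prevention rule set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Email jane.doe@example.com about the redundancy plan"))
```

Running the filter automatically, rather than relying on each employee to remember the policy, means a careless paste of confidential data is caught before it leaves the organisation.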

Without proper consideration for the queries and requests posted, information can be carelessly leaked which could result in major disruption and damage to an organisation. Unfortunately, it’s possible for LLMs to be hacked, exposing all queries alongside sensitive information. Around 39% of UK businesses were victims of a cyber-attack in 2022 and this is only set to rise in 2023 if minimal action is taken to protect businesses.

How do AI and LLMs affect business cybersecurity?

As technology develops, cybercriminals evolve their methods too. Although the full extent of AI-enabled cybercrime is yet to be seen, it’s clear that more sophisticated phishing scams are likely to arise from LLM usage. Phishing is currently the most common form of cybercrime, with around 3.4 billion phishing emails sent every day. With LLMs, attackers can script and automate communications free of the spelling errors that would otherwise make them look suspicious.

Bog-standard anti-virus software is no longer sufficient, especially as threats continue to adapt, evolve and learn. That’s why an always-on approach is necessary: cybersecurity monitoring, running 24/7, is vital to tackle increasing threats and the sheer volume of event data and trends occurring online. Without proper consideration before adopting AI and LLMs, your business could be put at risk.



