By Jeff Watkins
Since November 2022, we’ve not been able to move for the wall-to-wall media coverage that generative AI has, well, generated (no pun intended!). It’s no exaggeration to say that the touch paper has been well and truly lit when it comes to AI. In fact, from our current vantage point, it’s hard to believe that AI winters were even a thing back in the ’70s and ’80s.
In a race to catch the speeding bullet train of opportunity that is generative AI, businesses need to be cognisant of the risks too. After all, although AI itself is not a new technology, the application of its generative strand is still in its infancy.
In fact, at the time of writing, the US Federal Trade Commission (FTC) is probing Microsoft-backed ChatGPT creator OpenAI about the reliability of the information its platform produces, and what knock-on effects the tech presents for user privacy and reputational risk. It’s also the week we saw US comedian Sarah Silverman start legal proceedings against OpenAI for copyright infringement.
The technology could well be hitting some short-term friction, and organisations would do well not only to think about the possibilities the technology could unlock, but also to consider some of the risks it may present.
A brief overview: minimising the risks
Given generative AI’s power, eradicating risk completely would be a flawed strategy, akin to the plight of Sisyphus in Greek myth. However, there is a great deal that organisations can do to minimise it.
Firstly, investment in the technology itself will be key to understanding the potential risks and opportunities it can open up. This almost sounds counterintuitive, but if you want the upside of the tech, you need some skin in the game from the start. This isn’t a technology for mere observers; it’s moving at pace and will only benefit those that engage and evolve their operations by harnessing it. Those that don’t will likely wither on the vine.
Secondly, organisations need to keep abreast of the legislative landscape in their own nation as well as in other countries. The US, Europe, the UK and China are all charting slightly different regulatory positions on the technology, a potential minefield for organisations with global footprints.
With global governments currently playing a game of regulatory catch-up, it’s important that decision-makers understand their own risk profile from a number of competing perspectives, such as reputational, data/GDPR and financial risk. They also need to think about the specific risk responses available to them, in particular monitoring, accepting, mitigating or avoiding the risk. Again, it boils down to what a company is comfortable with.
To help, companies should invest in learning and development, build relationships with AI communities, and seek out expert third parties when building out capability in this ever-evolving field.
Built-in prevention methods
So, what practical measures can companies take to monitor the risks of generative AI?
Well, firstly, everybody should be updating their IT policies to cover generative AI usage, and also reviewing their general information governance policies and training.
Operational logging, monitoring and alerting are not only key to understanding how the risks are materialising in the real world; this information also needs to be fed regularly into a Security Information and Event Management (SIEM) system. In fact, AI can be of further help here, gauging sentiment and supporting pattern analysis. Regular vulnerability testing will also help ensure any AI-enabled services aren’t attacked through the supply chain or zero-day exploits.
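As a minimal sketch of what that logging could look like (the event fields and the log_ai_event helper are illustrative assumptions, not a prescribed schema), the snippet below records each generative AI interaction as a structured JSON line that most SIEM platforms can ingest directly or via a forwarder:

```python
import json
import logging
import time
import uuid

# Structured JSON-lines audit log; SIEM platforms can typically ingest
# this format directly or through a log forwarder.
logger = logging.getLogger("genai.audit")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("genai_events.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_ai_event(user_id: str, model: str, prompt_chars: int,
                 flagged: bool, latency_ms: float) -> None:
    """Record one generative AI interaction as a structured event."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,            # who called the model
        "model": model,                # which model/version answered
        "prompt_chars": prompt_chars,  # size only; avoid logging raw prompts
        "flagged": flagged,            # did a content filter trip?
        "latency_ms": latency_ms,
    }
    logger.info(json.dumps(event))

log_ai_event("u-1234", "gpt-4", prompt_chars=512, flagged=False, latency_ms=840.0)
```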
Understanding how data is being used is also a huge priority for AI-enabled companies: limiting where it flows where possible (for example, moving to EU-based OpenAI instances on Azure) and stripping anything sensitive before it goes out.
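To illustrate the ‘stripping anything sensitive’ step, here is a deliberately simple sketch that redacts a few common patterns before a prompt leaves the estate. The patterns and the redact helper are assumptions for illustration only; a production system should lean on a dedicated PII-detection library or service rather than hand-rolled regexes:

```python
import re

# Illustrative patterns only; real deployments warrant a proper
# PII-detection service rather than hand-rolled regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK_PHONE": re.compile(r"(?:\+44|\b0)\d{9,10}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

def redact(text: str) -> str:
    """Replace anything that looks sensitive before it goes out."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Email jane.doe@example.com or call 07700900123 about the claim."
print(redact(prompt))
# -> Email [EMAIL REDACTED] or call [UK_PHONE REDACTED] about the claim.
```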
Organisations also need to make sure their digital estates are well covered with technologies such as data loss prevention (DLP), helping them to understand where data is or isn’t going. DLP acts as a vital safety net if and when an AI-enabled service goes wrong.
Operational solutions
When it comes to recognising the potential risks relating to generative AI, there are a number of things organisations should be keeping tabs on too.
Firstly, keeping an eye on how their systems are being used, rolling up topics, attacks and other exploits to understand the shifting threat landscape, will be key, along with keeping warning thresholds low for anomalous events.
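To make ‘low thresholds’ concrete, the sketch below fires a warning as soon as a handful of flagged events land inside a short sliding window; the threshold, window length and alert mechanism are all illustrative assumptions to be tuned to your own risk appetite:

```python
import time
from collections import deque

class LowThresholdAlerter:
    """Fire a warning once flagged events exceed a deliberately low
    threshold inside a sliding time window."""

    def __init__(self, threshold: int = 3, window_seconds: int = 300):
        self.threshold = threshold   # kept low on purpose
        self.window = window_seconds
        self.events = deque()        # timestamps of flagged events

    def record(self, flagged: bool) -> None:
        now = time.time()
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if flagged:
            self.events.append(now)
        if len(self.events) >= self.threshold:
            self.alert(len(self.events))

    def alert(self, count: int) -> None:
        # In production this would page on-call or raise a SIEM alert.
        print(f"WARNING: {count} anomalous GenAI events in {self.window}s")

alerter = LowThresholdAlerter()
for outcome in [True, False, True, True]:  # simulated event stream
    alerter.record(outcome)                # third flagged event alerts
```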
Ensuring all AI-augmented platforms and services have a dedicated ‘kill switch’, with the ability to revoke keys and other methods of access, will become ever more vital as we advance towards peak GenAI. For more fundamental problems, companies also need to be prepared with backup solutions to give them the time and space needed for remediation work.
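One lightweight way to build such a switch, sketched here under assumptions (an environment-variable flag and a hypothetical call_model backend standing in for the real AI service), is to gate every call behind a centrally controlled flag and degrade gracefully when it is off:

```python
import os

class KillSwitch:
    """Gate an AI-augmented service behind a flag that can be flipped
    centrally, e.g. via an env var or a feature-flag service."""

    def __init__(self, flag_name: str = "GENAI_ENABLED"):
        self.flag_name = flag_name

    def enabled(self) -> bool:
        # Re-read on every call so flipping the flag takes effect
        # immediately, with no redeploy needed.
        return os.environ.get(self.flag_name, "true").lower() == "true"

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for the real AI backend.
    return f"(model response to: {prompt})"

def answer_query(prompt: str, switch: KillSwitch) -> str:
    if not switch.enabled():
        # Degrade gracefully to a non-AI fallback during remediation.
        return "This feature is temporarily unavailable."
    return call_model(prompt)

print(answer_query("Summarise my meeting notes", KillSwitch()))
```

Revoking API keys at the provider side remains the harder backstop; a flag like this simply buys time while that happens.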
Social media will also be a useful tool in monitoring AI-enabled services. It’s often a great yardstick of how a service, function or platform is performing in the market, so keeping a watch on service mentions and keywords after a big product launch is always a good idea, especially when it comes to picking up any AI responses that breach ethical guidelines or are reputationally damaging.
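As a toy example of that keyword watching (the product name, risk terms and source of the posts are all hypothetical; real monitoring would draw on a social listening tool’s API or export), a simple filter over a feed of posts might look like this:

```python
# Hypothetical risk terms to watch alongside mentions of the product.
RISK_TERMS = {"offensive", "data leak", "broken", "hallucination"}

def flag_posts(posts: list[str], product: str) -> list[str]:
    """Return posts that mention the product alongside a risk term."""
    hits = []
    for post in posts:
        text = post.lower()
        if product.lower() in text and any(t in text for t in RISK_TERMS):
            hits.append(post)
    return hits

posts = [
    "Loving the new AcmeBot release!",
    "AcmeBot just gave me a really offensive answer.",
]
print(flag_posts(posts, "AcmeBot"))  # -> the second post only
```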
Giving engineering teams access to the latest AI-related news on the underlying technologies they’re using is another preventative measure you can put in place. This will help them spot any upstream problems quickly, allowing engineers to proactively restrict affected services as required.
All aboard the GenAI bullet train
Based on its current trajectory, it’s clear that generative AI is going to transform the world as we know it. In just eight months, it has already made great strides in bringing AI to the people.
What’s more, generative AI isn’t for spectators who want to play catch-up at a later date. The reality is that those that start their ‘GenAI journey’ now will almost inevitably be the ones that secure longevity. However, these pioneers still need to treat the technology with due respect: every application of generative AI needs to keep both the opportunities and the risks front of mind at all times. Failure to do so could be catastrophic for organisations and their customers. Mismanagement, in short, is not an option.
Jeff Watkins is chief product and technology officer at xDesign.