The Law Society states that ‘artificial intelligence’ (AI) refers to computer systems that can replicate human cognitive functions, including algorithms that detect patterns in data and apply those patterns to automate certain tasks.
A report from the Deloitte AI Institute and the Deloitte Center for Integrated Research describes this as an opportunity-rich period in the history of AI technology. AI requires organisations to re-imagine how work gets done, and arguably this is how progress is made. It gives machines the ability to learn and to develop continuously. This is why many see it as an exciting addition to our lives, pointing towards a futuristic world that includes the Metaverse.
There are wide-ranging views on AI, with some people worried about job losses, self-driving cars causing accidents, or robots taking over. Whilst all of these views make for interesting discussion, this article considers a critical element of AI: the data protection considerations.
From a business perspective, AI has numerous benefits, such as allowing businesses to operate more efficiently by ‘freeing up’ individuals’ capacity to focus elsewhere, potentially on new ways to develop the business.
A unique quality of AI is its ability to predict needs based on the data it gathers. However, the more data that is gathered, the more personal data the AI technology is privy to. This begs the question: how is the gathered data kept secure? People are becoming more aware of the protection that the law offers their personal data. The Information Commissioner’s Office (ICO) states that AI is a priority area because it has the potential to pose a high risk to individuals and their rights and freedoms, and its current areas of focus include privacy and confidentiality.
The ICO states that, as AI systems process personal data in different ways for different purposes, those purposes must be broken down and considered separately so that the relevant lawful basis can be applied to each. It is also important to note that where a form of processing changes, it should be continually monitored to assess whether the organisation can still rely on that lawful basis. An organisation’s use of AI is also likely to involve relationships with third parties, and this brings further risks.
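The ICO’s point about separating purposes can be pictured as a simple record of processing activities, with each purpose mapped to its own lawful basis. The sketch below is purely illustrative — the purposes, data categories, and lawful bases shown are hypothetical, and a real register would follow an organisation’s own documentation practices under UK GDPR.

```python
from dataclasses import dataclass, field


@dataclass
class ProcessingActivity:
    """One distinct purpose for which personal data is processed."""
    purpose: str                          # why the personal data is processed
    data_categories: list[str] = field(default_factory=list)  # what data is involved
    lawful_basis: str = ""                # the UK GDPR Article 6 basis relied upon


# Hypothetical register for an AI-driven recommendation system:
# each purpose is recorded separately, with its own lawful basis.
register = [
    ProcessingActivity(
        purpose="model training",
        data_categories=["purchase history"],
        lawful_basis="legitimate interests",
    ),
    ProcessingActivity(
        purpose="personalised recommendations",
        data_categories=["browsing behaviour"],
        lawful_basis="consent",
    ),
]


def basis_for(purpose: str) -> str:
    """Look up the lawful basis recorded for a given processing purpose."""
    for activity in register:
        if activity.purpose == purpose:
            return activity.lawful_basis
    raise KeyError(f"No lawful basis recorded for purpose: {purpose!r}")
```

Keeping each purpose as a separate entry makes it straightforward to re-assess a single lawful basis when that form of processing changes, without disturbing the rest of the register.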
ChatGPT is a natural language processing tool driven by AI technology. Its function is in the name – it allows you to have a chat with it. However, it is also much more than a typical chatbot, as it can answer a wide range of questions. People have started using it in their jobs: when they do not know how to perform a certain task, ChatGPT can supply the solution, acting like an experienced colleague in the same field. Needless to say, it is an impressive technology and can provide users with a great deal of assistance.
The increasing popularity of AI will no doubt bring about new laws and regulations to keep pace with these issues.
In July 2022, the UK Government released its policy paper, ‘Establishing a pro-innovation approach to regulating AI’. It describes its proposals as light-touch, and wants any new regulation to take account of specific contexts, to be coherent across different sectors, and to be as simple as possible. For more information on the UK Government’s proposals for regulating the use of AI technology whilst protecting data and promoting innovation, listen to our podcast on this here.
We are eager to see what the future holds for AI from a legal perspective, and we are sure you are too. We will be closely monitoring this area and will report on future developments.