Artificial intelligence in the financial services industry: opportunities and risks
Artificial intelligence (AI) holds major potential for financial services providers, which already use the technology in their decision-making processes. But the use of AI also carries ethical risks and business challenges in a still-evolving regulatory environment.
“Financial institutions that use AI face data protection challenges and ethical dilemmas around what kind of data they can collect, how to process it and how to obtain informed consent from their clients,” explains Marc-Antoine Dilhac, associate professor of ethics and political philosophy at the Université de Montréal. This past spring, he participated in a consultation on AI in financial services organized by the Autorité des marchés financiers (AMF).
There are provincial and federal laws governing the protection of personal information, but no specific provisions regarding AI. “That could change soon with two new bills on the table in Ottawa and Quebec City,” says Charles Morgan, national co-leader of McCarthy Tétrault’s Cyber/Data Group.
The pieces of legislation are Bill 64 in Quebec and Bill C-11 at the federal level. “Both bills introduce regulations on automated decision systems,” Charles Morgan explains. The obligations are twofold: organizations will have to state in their privacy policies which data these systems use, and they must disclose how the systems are used in decision-making.
An opaque process
Transparency is emerging as an enormous technological challenge for AI due to the “black box” problem. Artificial neural networks, which are commonly used in AI-based decision-making processes, are capable of processing a phenomenal quantity of data, like text, images, or data captured by sensors or a smart watch.
“We throw all this raw data at an AI system, and it deduces a decision-making rule, which is often more precise than traditional rules,” explains François Laviolette, professor in the Computer Science and Software Engineering Department at Université Laval and Canada-CIFAR AI Chair. “However, you can never really understand how the neural network works. It’s too complex.”
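For readers who want to see the problem concretely, here is a minimal sketch in Python using synthetic data (the features, the hidden rule and the model size are all invented for illustration). The network learns an accurate decision rule, but that rule lives in thousands of numeric weights that no human can read as an explanation.

```python
# Minimal sketch of the "black box" problem, using synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                        # 1,000 synthetic client records, 20 raw features
y = (X[:, 0] + 0.5 * X[:, 3] ** 2 > 0.7).astype(int)   # a hidden "true" approval rule

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# The rule the network deduced is encoded in its weight matrices --
# accurate, but opaque to a human reviewer.
print([w.shape for w in model.coefs_])                 # [(20, 64), (64, 64), (64, 1)]
```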
This creates a major problem for financial services providers, which are required to clearly explain the reasons behind their decisions. Clients have the right to know why they have been approved or turned down for a particular loan or insurance policy, and how the financial institution determined their premiums.

The capabilities of AI carry two additional risks for insurance providers. “AI could lead to hyper-segmentation, calling into question the principle of risk pooling,” says Marc-Antoine Dilhac. “Some individuals could wind up paying much more than others, or be denied access to insurance.”
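To make the pooling concern concrete, here is a toy calculation (all figures are invented for the example). Under pooling, everyone pays roughly the average expected cost; under full segmentation, the highest-risk clients pay several times that amount.

```python
# Toy illustration of risk pooling vs. hyper-segmentation (invented numbers).
expected_claims = {"low_risk": 200, "medium_risk": 600, "high_risk": 2400}  # $/client/year
clients = {"low_risk": 700, "medium_risk": 250, "high_risk": 50}

total_cost = sum(expected_claims[g] * clients[g] for g in clients)
pooled_premium = total_cost / sum(clients.values())

print(f"pooled premium for everyone: ${pooled_premium:.0f}")   # $410
for g, cost in expected_claims.items():
    print(f"fully segmented premium, {g}: ${cost}")            # high-risk pays ~6x the pooled rate
```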
Insurers are also pairing AI with connected objects to introduce “nudging” mechanisms to steer the behaviour of their clients. For example, an app on a smart watch or phone might suggest goals for exercise or sleep, send notifications or measure physical activity.
“There’s a risk of infringing on users’ freedom to make their own life choices,” Marc-Antoine Dilhac warns. “For consent to be truly free and informed, users must have other options. If they are forced to accept these tools in order to access insurance, or if there are high premiums for refusing to use the tools, then consent is not free.”
Biased tools
Another challenge is that despite what some might think, AI is not neutral. “Human beings who program algorithms may consciously or unconsciously introduce bias,” explains Golnoosh Farnadi, assistant professor at HEC Montréal and Canada-CIFAR AI Chair. “In addition, these algorithms learn and make decisions using the data you supply them with. If this data is biased or discriminatory, problems may arise.”
Solving the problem of bias in AI is complex. Biases are always defined with respect to a variable, such as age, gender, ethnic or cultural origin, or place of residence. In theory, two identical client files should not receive different decisions just because one of the clients is a woman, or Indigenous, or lives in a neighbourhood considered richer or poorer than average.
To avoid situations like these, insurers exclude certain types of information from their automated decision systems. But they’re going down the wrong path, argues François Laviolette. “Algorithms can easily exploit correlations with other variables, like first name, last name or even occupation, to infer a person’s gender or ethnic or cultural origin,” he warns.
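A small synthetic experiment shows how this leakage works (the variable names and correlation strength are assumptions made up for the sketch). Gender is withheld from the model entirely, yet a simple classifier recovers it from a correlated proxy well above chance.

```python
# Sketch: a protected attribute removed from the file can still be inferred from proxies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
gender = rng.integers(0, 2, size=n)                   # protected attribute, excluded from the data
occupation = gender + rng.normal(scale=0.8, size=n)   # a proxy correlated with gender
income = rng.normal(size=n)                           # an unrelated variable

X = np.column_stack([occupation, income])             # note: gender itself is not included
X_train, X_test, g_train, g_test = train_test_split(X, gender, random_state=0)

leak = LogisticRegression().fit(X_train, g_train)
print(f"gender recovered from proxies: {leak.score(X_test, g_test):.0%} accuracy")  # well above 50%
```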
So what’s the solution? François Laviolette believes that insurers should first define which variables should not influence decisions. For example, they might conclude that age may legitimately be taken into account, while ethnic origin and gender may not. “They should still include all these variables in their decision-making processes, but carry out audits afterwards to ensure there’s no bias in the decisions,” advises François Laviolette.
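One simple form such an after-the-fact audit could take is a comparison of approval rates across groups. The sketch below is a bare-bones illustration of the idea, not an industry or regulatory standard; the gap threshold and the function name are assumptions.

```python
# Sketch of a post-hoc fairness audit: keep the sensitive variable available,
# then check the decisions themselves for disparities between groups.
import numpy as np

def audit_approval_rates(decisions, group, max_gap=0.05):
    """Flag a potential bias if group approval rates differ by more than max_gap."""
    rates = {g: decisions[group == g].mean() for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > max_gap

decisions = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0])  # 1 = approved (toy data)
group = np.array(["A"] * 5 + ["B"] * 5)

rates, gap, flagged = audit_approval_rates(decisions, group)
print(rates, f"gap = {gap:.2f}", "REVIEW" if flagged else "OK")
```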
The takeaway for financial services providers is to approach AI with caution. Businesses risk investing in tools that meet current regulations but fall out of line with rules that will evolve over the coming years. On April 21, 2021, the European Commission proposed a new regulatory framework for AI. Among other measures, it would ban “systems or applications that manipulate human behaviour to circumvent users’ free will.” It would also classify AI technologies used in assessing credit risk as “high-risk.”
“The AI sector has seen vigorous growth,” Charles Morgan admits. “But the use of AI in financial services also presents risks for businesses and consumers. It’s best to be cautious.”
References
This article originally appeared (in French) in CSF Magazine.