Artificial Intelligence (AI) has transformed the way the world functions. However, alongside its many advantages come legal loopholes, particularly around liability. For instance, who is responsible if a driverless AI car causes an accident that claims lives or causes injury? Would it be the person who designed the AI algorithm? The user of the car? The driverless car itself?

BURDENED RESPONSIBILITY

AI is, at its core, an embodiment of code and algorithms: a system that lacks empathy and is driven solely by bytes of data. The autonomy granted to AI systems through machine learning, or self-learning, makes it difficult to determine whether such a system should bear liability. Complications arise when a system has made decisions based on machine-learning principles.

LEGAL MINDS

For example, if an AI system or chatbot engages in discriminatory practices against individuals, should it be held responsible for the resulting harm? Advocate Kritika Oberoi, Delhi High Court & District Courts, says, “It’s time for businesses to scrutinise and define their accountability structures to ensure the ethical and effective use of AI, a blend of both innovation and responsibility and reliability.” She explains that accountability for AI systems is usually a delicate terrain to navigate.
This is primarily because a variety of individuals are involved in building and operating AI systems.

Advocate Kapil Naresh, Technology Lawyer and Transaction Associate at H&B Partners, says, “When it comes to the question of vicarious liability (a person being liable for the act of another) in the context of AI systems, it’s essential to understand that many parties could potentially be responsible if something goes wrong.” If a developer designs an impeccable AI system but the end user fails to take even minimal responsibility for adhering to safety measures, would it be right for the former to bear the brunt if things go wrong? “Applying the present system of laws to AI could be a challenging task,” says Kapil. The present laws are not sufficient to address such nuances.

Samridhi Jain, a Global Paralegal and Founder at Legal Technology Marketing, shares how the company, too, could be held responsible in certain situations. “Failing to provide sufficient guidance on the proper use of the AI system, especially regarding its limitations, might result in liability,” she says.

However, there is some room for seeking cover under existing laws. Many may choose to rely on constitutional provisions such as the Right to Equality (Article 14) or Article 15, which prohibits discrimination. Samridhi opines that one could also invoke ‘algorithmic bias’, a new domain in which it is not an individual but an algorithm that discriminates. She says, “If an AI system used in lending practices unfairly denies a loan to individuals from certain communities, it could be challenged as being discriminatory against individuals.”

POSSIBLE SOLUTIONS

While AI-related problems are many, one can be more cautious about reading terms and conditions when consenting to the use of personal details.
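The algorithmic-bias scenario in lending described above can be illustrated with a simple screening test. The sketch below applies the “four-fifths rule”, a widely used check for disparate impact; the approval counts and group labels are hypothetical, invented purely for illustration, and a real audit would involve far more rigorous statistical analysis.

```python
# Hypothetical illustration of a disparate-impact check on lending decisions.
# The approval counts below are invented for the example.

def approval_rate(approved: int, total: int) -> float:
    """Fraction of loan applications approved for a group."""
    return approved / total

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the protected group's approval rate to the reference
    group's; under the four-fifths rule, values below 0.8 are commonly
    flagged for further review as potentially discriminatory."""
    return rate_protected / rate_reference

rate_reference = approval_rate(45, 100)  # reference group: 45% approved
rate_protected = approval_rate(30, 100)  # protected group: 30% approved

ratio = disparate_impact_ratio(rate_protected, rate_reference)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.67, below the 0.8 threshold
```

A ratio this far below 0.8 would not by itself prove discrimination, but it is the kind of quantitative evidence that could support the legal challenge Samridhi describes.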
On intellectual property violations, Advocate Kritika opines that implementing comprehensive tracking systems would help identify the origin of a system. “Addressing legal loopholes surrounding AI requires a pragmatic approach that views machine-generated and human-generated elements as being interconnected,” says Advocate Kapil Naresh.

Other systems to draw from include the EU AI Liability Directive, which advocates a risk-based approach and serves as a comprehensive legal framework setting out harmonised rules for AI-based systems. From the Indian perspective, however, given the nascent boom in AI technology, the country still faces unique challenges. Advocate Kapil says, “We should take inspiration from the EU’s AI Act; it’s time we do.” India needs to start by first codifying its laws. Kritika says, “The challenge with current laws is that they are opaque, complex and have a huge translation gap between the people who make laws and the organs that interpret them.”

LONG ROAD AHEAD

While a few policies and initiatives have been put forth, from the India AI portal to the India AI Mission of 2024, what they lack is enforceability. Guidelines may guide, but they do not necessarily nudge towards adequate compliance. India’s approach to AI governance has been a patchwork of policies, strategies and ethical guidelines, and these fall short of binding legal structures. As Samridhi puts it, “These are simply not enforceable!”