
Machines, Middlemen, and Mandates: Vicarious Liability under the Companies Act, 2013

Posted on November 5, 2025 by Tech Law Forum NALSAR

[This article is authored by Divyansh Bansal and Priyanjali Singh, second-year B.A. LL.B. students at National Law University, Jodhpur. This piece examines how India’s Companies Act, 2013 struggles to assign accountability when corporations delegate critical functions to AI systems and third-party vendors. The authors argue that existing vicarious liability frameworks, designed for human actors, must evolve through statutory reforms to ensure that directors remain accountable even as machines and intermediaries increasingly shape corporate decision-making.]

Who Bears the Blame? Unpacking Vicarious Liability in the Corporate Power Structure

Even in ancient societies, power came with a price: if the pawn erred, the king paid. Vicarious liability is that ancient principle, now dressed in corporate robes. It holds that a person or entity can be held responsible for the wrongful acts of others under their control, even if they were not directly involved or did not intend any harm. Over time, this principle has been extended to the corporate world, where the question arises: should only the company be held liable for misconduct, or should those who direct and manage its affairs also share responsibility? After all, a director is not just a figurehead; they are an agent appointed by the shareholders to run the company, establish its vision, and uphold the aspirations of other stakeholders. With that role comes a wide range of expectations, including exercising independent judgment, supervising key decisions, and ensuring that the company acts responsibly.

Reflecting this, Section 2(60) of the Companies Act, 2013 [“CA”] labels directors as “officers in default,” making them personally liable for misconduct carried out by employees or agents acting on the company’s behalf. Other provisions, such as Sections 134, 166, and 447, further outline the statutory obligations of directors and key managerial personnel [“KMPs”], including duties of diligence, disclosure, and good faith. The law implicitly assumes that such key managerial functions [“KMFs”], like compliance, disclosure, and fraud detection, are carried out by natural persons capable of bearing legal and ethical responsibility.

But today, with companies increasingly outsourcing key tasks to third-party vendors and relying on AI-driven systems, the question of who should be held liable is getting more complicated. When critical functions are handled by outsiders or machines, it’s harder to pin down responsibility. An AI system tasked with financial forecasting or compliance reporting may perform actions with significant governance implications, yet it lacks legal personality and intent. Similarly, outsourcing agents may execute decisions based on delegated authority, raising questions about agency and control. Adding another layer to this puzzle is the “corporate veil” – the legal shield that treats a company as its own separate person and usually protects directors from personal blame.

These developments challenge the existing corporate governance paradigm, which was built around human actors with identifiable fiduciary obligations. As AI and outsourced agents increasingly participate in, and even substitute for, human decision-makers, the regulatory framework must address this new kind of accountability gap. In this essay, we aim to explain how vicarious liability operates under the CA, especially in the age of outsourcing and AI, and why directors must stay alert to ensure accountability does not slip through the cracks as business evolves. We also explore the scope of liability when AI or third-party agents make critical errors.

When Control is Contracted: Legal Implications of Outsourcing Managerial Authority

Outsourcing implies that an activity once performed within an entity’s organizational confines (“in-house”) is moved to an outside, organizationally separate, supplier. The emphasis here should be on “separate supplier,” as this implies that the degree of control over such vendors becomes illusory and merely regulatory. However, does this independence imply that the company that has outsourced its functions, and its directors, forsake all legal liability over those functions?

Although no court has specifically looked into this question, one can find an answer in the tort principle of vicarious liability, from which corporate vicarious liability was adopted. It is a well-established principle in tort that an employer cannot be held liable for the acts of negligence of independent contractors. An independent contractor can be defined as “one who undertakes to produce a given result, but so that in the actual execution of the work, (i) he is not under the order or control of the person for whom he does it, and (ii) may use his own discretion in things not specified beforehand.”

Both of these criteria can be easily identified in companies offering outsourcing services, as the company outsourcing its functions typically does not control the day-to-day administration of those functions. As such, by virtue of their ‘separate’ nature, such vendors can be labelled independent contractors, and the company that has outsourced its functions cannot, as a result, be held liable for their acts of negligence.

However, what about criminal offences? Does the bar remain the same in both civil and criminal liability?

In S.M.S. Pharmaceuticals Ltd. v. Neeta Bhalla, the Court examined the definition of the word “director” itself, as defined in Section 2(13) of the Companies Act, 1956, and held that “simply by being a director in a company, one is not supposed to discharge particular functions on behalf of a company. It happens that a person may be a director in a company but he may not know anything about the day-to-day functioning of the company.” Following this, the courts have underscored numerous times that only those persons who were in charge of, and responsible for, the conduct of the business of the company at the time of commission of an offence will be liable for criminal action, rather than by virtue of mere positional authority.

Further, it has been reiterated, most recently in Sunil Bharti Mittal v. Central Bureau of Investigation, that individual liability for an offence must be clearly established through direct evidence of involvement or by a specific statutory provision.

The third point of note is found in H.L. Bolton Co. Ltd. v. T.J. Graham & Sons Ltd., wherein the court resolved the conundrum of the absence of mens rea in an artificial person such as a company through the doctrine of attribution, comparing a corporation with a human body, with its directors and managers representing the “mind and will” of the organization. These individuals dictate the company’s actions and decisions, and their state of mind is legally treated as that of the corporation itself.

Thus, to establish criminal corporate vicarious liability, three elements are imperative: (i) being in charge of and responsible for the conduct of the business, (ii) personal involvement or specific statutory backing, and (iii) the will of the directors corresponding to the will of the company. But in the case of outsourcing, since the vendors are separate and independent contractors, the directors of the company outsourcing its functions do not have substantial control over the conduct of the outsourced business activity, nor are they in charge of it; it therefore cannot be said that the act done by the vendor represents their will. In such a case, assigning vicarious liability would be neither fair nor reasonable.

However, does that mean no liability can be attributed to the directors of the company outsourcing its functions?

That is not entirely true: the Companies Act, 2013 itself, through Section 166, imposes fiduciary duties on such directors to exercise due and reasonable care, skill and diligence, to exercise independent judgment, and to act in good faith. If the vendor’s fault arises from a lack of reasonable care or due diligence by such directors in vendor selection, compliance checks, audits, and the like, then direct liability under the Act can be attributed to them. This form of liability is not vicarious but direct, arising from a breach of statutory duty.

This distinction becomes especially pertinent in the age of AI, where directors may authorise the deployment of systems whose functioning they do not fully understand.

When Algorithms Take the Wheel: Corporate Liability in the Age of AI Management?

AI, for legal purposes, is not yet recognised as a juridical or natural person under Indian law. This precludes its recognition as an “officer in default” or “officer” under the CA, categories which form the cornerstone of corporate accountability. Section 2(60) of the CA defines “officer who is in default” to include specific human individuals expressly charged with responsibility for compliance. Similarly, Section 2(59) defines “officer” as including directors or any person in accordance with whose directions the board is accustomed to act. Both provisions assume a conscious bearer of duties, capable of legal intention and consequence management. AI, lacking both sentience and legal personhood, cannot be said to act “knowingly,” “wilfully,” or “negligently,” nor can it be entrusted with statutory obligations or held to their breach. Therefore, AI lies normatively and ontologically outside the definitional scope of these statutory categories.

AI also does not fall within the definition of “person” under Section 3(47) of the General Clauses Act, 1897. Consequently, AI cannot independently fulfil or breach obligations under Sections 166 or 134 of the CA. Section 166 provides for the fiduciary duties of directors, including the duty to act in good faith, promote the objects of the company, exercise due and reasonable care, and avoid conflicts of interest. These duties are not merely formal but are substantive standards by which corporate governance is judged and enforced. Section 134 mandates the preparation and approval of financial statements by the board and the furnishing of directors’ responsibility statements certifying their accuracy, compliance, and the adequacy of internal financial controls. Both provisions assume an actor capable of comprehension, verification, and moral judgment. AI cannot perform such evaluative or ethical functions, and therefore any delegation of these statutory tasks to AI systems cannot transfer accountability.

The doctrine of identification, traditionally used to impute corporate criminal liability by attributing culpable mens rea to the “directing mind and will” of the company, presupposes a human actor capable of intention and consciousness. AI, by contrast, is a predictive model: it identifies statistical patterns based on training data and produces outputs accordingly. It lacks self-awareness, foresight, or a legally cognisable “intention.” Thus, AI systems cannot “intend” to commit fraud, misstate financial disclosures, or contravene statutory obligations. This creates a doctrinal rupture in the sense that if liability depends on the presence of a guilty mind, and neither the AI system nor any individual possesses such mens rea in relation to the AI’s acts, can the liability simply dissipate?

In earlier corporate hierarchies, we could trace the people responsible for a certain decision, simply by going up or down the hierarchy. However, with AI-generated decisions, particularly those developed through unsupervised learning or adaptive algorithms, there is often no single accountable human actor who can be said to have “authored” the decision. This results in a diffusion of responsibility, diluting accountability and raising serious corporate governance concerns.

Risks can arise from the AI itself being poorly trained, as it may inadvertently violate regulatory thresholds through inaccurate classification or reporting. Similarly, algorithms developed with biased training data could produce discriminatory outcomes, particularly in hiring or lending decisions, creating exposure under anti-discrimination laws and ESG mandates. Companies have started deploying AI to filter out candidates with gaps in their resumes. This is particularly discriminatory against women applicants, who may have had to take maternity leave after childbirth, and against disabled people who may have had to take time off to receive necessary treatment. When companies adopt AI systems without proper layers of human oversight, autonomous processes end up taking material decisions without proper accountability. In each of these scenarios, the locus of legal liability becomes obscured.

Yet, legal obligations do not cease to exist simply because the actor is non-human. The legal order demands attribution, and where the system produces harm, someone must bear responsibility.

In recognition of these dilemmas, global regulatory regimes have adopted a redistributive approach to accountability. The EU AI Act imposes obligations on “providers” and “deployers” of high-risk AI systems, including documentation, transparency, and human-oversight mandates. Similarly, the US FTC has issued guidance stating that algorithmic unfairness or deception by AI tools will trigger liability for the deploying corporation, not the tool.

So, the EU and the US have redistributed liability between developers, deployers, and corporate overseers. In India, however, this redistribution is yet to be codified. But one thing is clear: autonomous tools cannot absolve human liability. We cannot and should not deny restitution to a party solely because the perpetrator was an AI model.

The Way Forward

The CA has solely dealt with, and imposed liability upon, human actors, which stands in stark contrast to the reality that algorithmic systems are increasingly relied upon to take decisions and that key managerial functions are being delegated to outsourcing vendors. This necessitates an urgent recalibration of accountability.

The first step in dealing with AI dependence would be institution-level recognition of accountability, which can be achieved via statutory inclusion. This could take the form of an express provision or Explanation under Chapter XI of the Companies Act, making clear that the delegation of managerial discretion to an AI system does not dilute the duties, responsibilities, or potential liabilities of directors and KMPs. This is in line with the principle of not holding the AI itself accountable, while preventing it from being used as a tool to diffuse or escape responsibility.

Secondly, AI risk audits and impact assessments need to be statutorily mandated for companies. Standards of fairness, transparency, and reliability should also be statutorily established to facilitate their conduct. Corporate bodies should then be required to ensure that these assessments are conducted as per those standards and that their results are published in the company’s annual report, thereby enhancing shareholder confidence and reducing informational asymmetries. Companies should also be mandated to disclose their AI deployment protocols in their annual board reports under Section 134 of the Act, which presently only requires the directors to disclose formal evaluations of the board’s functioning and risk management frameworks. Extending this requirement to include “technological due diligence” would ensure that directors explicitly disclose the scope, purpose, and safeguards associated with AI use in KMFs and compliance areas. Such a move would create ex ante accountability and enable regulators and shareholders to evaluate whether the directors acted with adequate foresight and care.

Thirdly, regulatory oversight must also evolve in tandem. Institutions such as SEBI and the MCA should consider issuing sectoral guidelines on AI usage in corporate governance. These guidelines could mirror the OECD AI Principles or draw on the EU AI Act’s risk-based approach to classify algorithmic systems used in corporate contexts. The focus should again be on emphasising that AI actions do not absolve humans of their statutory obligations. Regulators must periodically check compliance with these guidelines.

Moreover, companies should be statutorily required to constitute an AI Governance Committee at the board level (similar to the audit or nomination and remuneration committees). This Committee would oversee the ethical deployment, compliance implications, and organisational integration of AI systems. Its existence would signal a structural commitment to accountability and provide a formal node for dialogue between technological innovation and corporate governance.

For regulating the outsourcing sector, the due diligence requirement for directors, which is currently flimsy and subjective, needs to be solidified. Inspiration should be drawn from the IRDAI and RBI guidelines on outsourcing, which regulate it through precise definitions, the identification of non-delegable activities, clear duties for the company outsourcing its functions, and so on. Legally mandated safeguards must be introduced to ensure directors can credibly claim they acted with reasonable care. This includes pre-outsourcing due diligence on vendor compliance, infrastructure, and ESG standards, followed by ongoing oversight through certifications, red-flag systems, and site inspections, especially for high-risk functions like data handling or payroll.

Conclusion

Corporate governance in the modern era faces the twin challenges of growing algorithmic autonomy and organisational outsourcing. The classic paradigm of vicarious liability, founded upon human control, intention, and vigilant oversight, is being stretched to its breaking point. The Companies Act, 2013, grounded in an era where accountability was anchored in identifiable human actors – the “officer in default,” the “directing mind and will” – is now faced with a reality where KMFs are either outsourced beyond traditional bounds or entrusted to autonomous AI systems devoid of legal personhood or moral agency.

This does not imply a liability vacuum, but rather a shift – from vicarious to direct accountability, anchored in proposed statutory duties under Sections 134 and 166. Directors cannot abdicate responsibility simply because the agent is artificial or external. Their fiduciary obligations require anticipatory governance, continuous oversight, and a proactive posture towards technological and structural delegation. The legal regime must evolve to recognise emerging lacunae without diluting foundational accountability norms. Efforts must be made to ensure that human oversight remains at the fulcrum of corporate governance at all times. Otherwise, as machines become more opaque and intermediaries more fragmented, corporate governance risks becoming both hollow and ungovernable.
