Is it time to recognise AI inventorship in patent law?

The question of whether patent law should recognise AI inventorship stems from recently filed patent applications in which the applicant named an AI system as the inventor. DABUS is an artificial intelligence (AI) system created by Dr Stephen Thaler. Thaler and his team filed a number of patent applications in different jurisdictions, naming DABUS, not a human being, as the inventor. While the UK[1] and US[2] courts rejected Thaler’s claim, their Australian counterpart sided with him.[3] Applications remain pending in Brazil, Canada, China, India, Israel, Japan, New Zealand, the Republic of Korea and Switzerland. This raises an important question: is it time to recognise AI inventorship? This post argues that the answer is no, for both moral and legal reasons.

Hello, human being!

Currently, most national patent laws and international agreements do not recognise AI as an inventor. The dominant view is that inventorship can only be granted to a human being, not a machine. Inventorship is tied to legal responsibilities, enforcement and litigation, which only a human can perform. Under the UK Patents Act 1977, inventors are entitled to compensation from their employers in certain circumstances.[4] Accepting AI inventorship would go too far at this moment, as it would move us beyond viewing AI as a tool towards treating it as a natural or legal person. That would represent a paradigm shift, requiring us to revisit the concept of the human being and the interrelation between technology and society, and it would create a moral dilemma.

Even giving AI a specific legal status is problematic, for patent law as well as for other areas of law such as contract or tort. Such a step comes with challenges. Can AIs act on the basis of their legal personality? Can they become contracting parties and thereby enforce their rights and obligations? Can AIs be “personally” liable and subject to litigation? How would an AI system bear such liability, and how could that be done in practice? Should it be rewarded for its work? Should we tax its earnings if the system generates enough revenue for tax purposes?

While this post will not detail the definition of AI, it is necessary to briefly mention its most advanced form: autonomous intelligence. In autonomous AI, automated processes generate the intelligence that allows machines, bots and systems to act independently, without human intervention. With the ability to self-learn, behave autonomously and make its own decisions, an AI can cause unintended effects that harm humans or property.

There are examples of collisions caused by self-driving cars[5] and of chatbots[6] that turned racist and engaged in hate speech. In those situations, the producers of the AIs took action to fix the unintended consequences. That is in line with current legal frameworks. Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts indicates that the person (whether natural or legal) on whose behalf a computer was programmed is ultimately responsible for any message generated by the machine. On this interpretation, a party using AI as a tool, whether at fault or not, must compensate for damage caused by the AI. Under EU Council Directive 85/374/EEC, a producer is liable for damage caused by a defect in his product.[7] Consistent with the EU Product Liability Directive, the UK Consumer Protection Act 1987 imposes strict liability on a producer for damage caused by a defective product.

While current laws in other areas do not treat AIs as persons, are we ready to do so within patent law?


[1] Thaler v The Comptroller-General of Patents, Designs and Trade Marks [2021] EWCA Civ 1374

[2] Thaler v Hirshfeld (E.D. Va. 2021), https://www.dwt.com/-/media/files/blogs/artificial-intelligence-law-advisor/2021/09/thaler-v-hirshfeld-decision.pdf

[3] Thaler v Commissioner of Patents [2021] FCA 879 (Federal Court of Australia).

[4] UK Patents Act 1977, ss 39–43.

[5] https://electrek.co/2016/05/26/tesla-model-s-crash-autopilot-video/

[6] https://mindmatters.ai/2018/08/the-new-politically-correct-chatbot-was-worse/

[7] Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, Article 1.
