Dealing with intellectual property issues presents a challenge, of sorts, for drug developers. The last in our series. 

There is no doubt that artificial intelligence is greatly influencing the field of drug discovery. For the past several days we’ve heard from a number of scientists, on both the academic and commercial sides, who are using or developing AI tools for drug discovery research. With these discoveries come ethical considerations. Last week, John Mitchell, a theoretical chemist at the University of St. Andrews, talked about the challenges of privacy and job attrition that AI presents. Today, in our final Q&A, we discuss some of the legal challenges over who owns the science with Takeshi S. Komatani. Dr. Komatani is with the law firm Shusaku Yamamoto, where he is currently the principal of the chemical/pharma and bio/life sciences groups. He received his PhD from the University of Tokyo, Japan, and his LL.B. from Keio University, Japan, and is qualified as a litigation-certified patent attorney before the Japanese courts and the Japan Patent Office, as well as a Japanese pharmacist (chemist) with a specialist certificate in Kampo and natural medicines. He is also a member of AIPPI and a Vice Chair of the TRIPS Standing Committee. He was previously a researcher at F. Hoffmann-La Roche in Basel, Switzerland. Eureka asked Dr. Komatani for his take on several of these overarching challenges that AI research presents. Here are his edited email responses.

Eureka: Can you explain why AI-enabled technologies have become such a concern for those who deal in IP?

TK: There are at least two kinds of issues in relation to AI-enabled technologies: subjective issues and objective issues.

Subjective issues concern who owns the intellectual property right (IPR) when AI is involved. This relates to the question of who is the inventor of an invention “invented” by an “AI”, and who can and should own exclusive rights such as patents, copyright and the like in subject matter (inventions, copyrighted works and the like) “created” by an AI. Many stakeholders are involved: “human” inventors, the “AI” creator/developer, the “AI” owner, the holder of “AI”-related patents, the holder of the learning data, and so on. In an extreme case, for example, under a literal interpretation of the Japanese Patent Act, an invention created solely by an AI cannot be protected by the Act and instead falls into the public domain. Although such an “AI”-autonomy-type invention has not yet appeared in the real world, there is already considerable discussion around the globe, since copyrightable works such as music are in fact being created solely by an “AI”. This topic has been discussed in many international conferences and academic journals. For example, at the 2018 General Assembly and Council Meetings of the Asian Patent Attorneys Association, there was a session devoted to this matter, in which a number of member countries stated that “AI” users should be eligible to be patent holders in such a case, although a number of differing opinions were presented.

With respect to the objective side, there are a number of complicated issues. A number of patent and intellectual property offices are discussing what is eligible as patentable subject matter. For example, in the US, after the Supreme Court decision in Alice (the Alice case), the criteria for patent-eligible subject matter became stricter, and thus many information technology (IT)-related inventions are subject to invalidation. The same should apply to AI-related technologies. On the other hand, in many jurisdictions there are discussions about how, and to what extent, the patent specification/description should describe the invention (this is also called the “written description requirement,” “sufficiency,” and/or the “enablement requirement”). As such, practitioners are struggling to prepare reasonable and robust specifications/descriptions to secure patent rights to AI-related technologies. With respect to copyrightable works, such as those related to experimental data and the like, things are more complicated, because copyright does not require registration. As a result, no one can actually identify who owns what.

Last but not least, “data” itself is also a big issue in relation to AI-related technologies. In this regard, the most recent G20 summit, held in Osaka (where our office happens to be located!), discussed and proposed the launch of the ‘Osaka Track’ framework for free cross-border data flow. We should watch what framework will be created for such cross-border data flow from an IPR point of view.

Eureka: Patents are generally granted when a compound is new and inventive, but it’s a debatable point whether a novel compound discovered by an AI algorithm was “invented.” So how might AI technologies transform the rights and obligations of patents and patent holders?

TK: Under Japanese law, such subject matter “invented” by AI does not qualify as patentable subject matter, since patentable subject matter must be created by a natural person. This is more or less the same under US legislation. Any “inventions” “invented” by “AI” would belong to the public domain. However, this question calls for extensive argument and needs to be debated in detail by professionals and experts in the respective fields.

Eureka: What are some novel ways that companies are effectively protecting their AI-related innovations?

TK: Usually such innovations are well protected by means of trade secrets. In some cases, patents covering such innovations prevail. For example, the Japan Patent Office has published guidelines on how to patent AI-related technologies and the standards for their patentability. Other offices have also published similar guidelines. Patents are therefore also one of the measures for protecting your AI-related work. This may look like an old approach, but it is novel in terms of how to prepare patent applications, since the standards are relatively new.

In relation to completely new legislation, Japan has introduced a system to protect valuable data, the “Limited Provision Data” system, as a new section within the Unfair Competition Prevention Act. This amendment to the law was enacted in July. Further, as mentioned above, the G20 summit held in Japan declared that a new protection system will be formed to protect data transfers across borders. Such a protection system for “data” itself is therefore one of the most important novel ways to protect AI-related innovations.

Eureka: How do you think AI will affect drug discovery jobs? Will we see the end of chemists?

TK: I do not think so. However, future chemists will be required to learn much more about AI technologies, and the chemists who survive will be those able to “use” AI rather than “be used by” AI.

Eureka: What do you think are the biggest challenges in using AI to help discover drugs? 

TK: Management of AI. AI is still in the process of development, and it sometimes behaves in unexpectedly inefficient or immoral ways. Therefore, as I mentioned in answer to the previous question, human beings must be able to “use” AI rather than “be used by” AI.

Eureka: What do you think will be the “Next Big Thing” in the application of AI in drug discovery?

TK: AI autonomy. With current AI technology, AI is still dependent on human beings to some extent. If autonomous AI appears, the world will be revolutionized.

Eureka: Robots are ubiquitous in entertainment. Who is your favorite robot?

TK: I do not have any preference. I like human beings.

Thanks for tuning in. This concludes our series on AI in Drug Discovery. You can find our entire series here.