"Artificial Intelligence (AI) in Pharmacovigilance:
do we really need it…?"
Part 1: The Technicalities
“Artificial Intelligence” is one of the most widely used buzzwords these days.
However, there seems to be a lack of clarity about the meaning of terms such as “artificial intelligence (AI)”, “machine learning (ML)”, “data science (DS)” and “office automation (OA)”, which should not be used interchangeably.
Pharmacovigilance (PV) represents a very interesting field in this regard, since it poses some unique technical challenges.
Just to give an example, case processing requires a mix of administrative and repetitive tasks (such as data cleaning and form filling) and tasks that require a high level of experience and specialization (such as medical reviews and signal detection).
Technology now offers a wide spectrum of solutions, ranging from already available “simple” procedure automation, which allows significant improvements in efficiency and quality, to sophisticated Natural Language Processing (NLP) solutions, which have shown promising results but are not yet fully operational.
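To make the two ends of this spectrum more concrete, here is a minimal, purely illustrative sketch in Python: a “simple automation” step that normalises a free-text date field, next to a toy keyword lookup standing in for what a real NLP pipeline would do with trained models and a proper medical terminology (e.g. MedDRA). All field names, formats and terms below are hypothetical.

```python
import re
from datetime import datetime

# Hypothetical mini-dictionary standing in for a real medical terminology such as MedDRA.
AE_TERMS = {"nausea", "headache", "rash", "dizziness"}

def normalise_date(raw: str) -> str:
    """'Simple' automation: accept a few common date formats and emit ISO 8601."""
    for fmt in ("%d/%m/%Y", "%Y-%m-%d", "%d %b %Y"):
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return ""  # leave blank for human review rather than guessing

def flag_ae_terms(narrative: str) -> list[str]:
    """Toy keyword lookup: a stand-in for what an NLP solution would do far better."""
    tokens = re.findall(r"[a-z]+", narrative.lower())
    return sorted(set(tokens) & AE_TERMS)

case = {"onset": "03 Mar 2023",
        "narrative": "Patient reported severe headache and mild nausea after the second dose."}
print(normalise_date(case["onset"]))     # 2023-03-03
print(flag_ae_terms(case["narrative"]))  # ['headache', 'nausea']
```

The point of the contrast is that the first function is reliable enough to deploy today, while the second only hints at a problem (free-text understanding) that genuine NLP systems address with far more sophisticated, but less mature, methods.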
This is the first part of a two-part article. In the second part we will address some of the open issues, of organizational, social and regulatory relevance, concerning the implementation of AI- and ML-based technologies in our domain.
You can read the full article at this link.
Part 2: When Machine meets Man
Part 1 of this two-part article gave an overview of the different approaches available to address the specific challenges of applying “Artificial Intelligence (AI)” to Pharmacovigilance (PV).
These approaches range from office automation (OA) to Natural Language Processing (NLP).
Recent developments in the domain of PV have also shown that the landscape is evolving quite rapidly.
Social networks, for example, went from “indispensable source of knowledge” to “secondary and optional” in a matter of months.
In addition, PV has some very specific needs, and the usual parameters used to gauge the adequacy of a machine learning (ML) based system – such as precision and accuracy – may not be sufficient.
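A deliberately contrived example (all numbers invented for illustration) shows why: the cases of interest in PV, e.g. serious or signal-relevant reports, are usually rare, so a system can post excellent accuracy while missing most of the reports that actually matter.

```python
# Toy illustration with invented numbers: 1,000 incoming reports, of which only 20
# are the "cases of interest" (e.g. serious / signal-relevant).
total, positives = 1000, 20
negatives = total - positives

# Hypothetical classifier: it catches 5 of the 20 relevant cases and raises 2 false alarms.
tp, fp = 5, 2
fn = positives - tp   # 15 relevant cases missed
tn = negatives - fp   # 978 irrelevant cases correctly ignored

accuracy  = (tp + tn) / total    # 0.983 -- looks excellent
precision = tp / (tp + fp)       # ~0.714 -- looks decent
recall    = tp / (tp + fn)       # 0.250 -- three quarters of the relevant cases are missed

print(f"accuracy={accuracy:.3f}  precision={precision:.3f}  recall={recall:.3f}")
```

In a safety context the cost of a missed case is far higher than the cost of a false alarm, so recall on the rare class, and metrics that account for class imbalance, matter at least as much as headline accuracy.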
The human factor should not be overlooked, either.
In fact, the implementation of automated systems poses some very relevant ethical, managerial and legal questions that are only beginning to be addressed by stakeholders and regulators.
Examples of these questions include the ultimate liability in case of problems, the presence of incorrect (or willfully misleading) information, and the impact these new technologies will have on the workforce.
A significant number of PV activities can already benefit from technologies that are available and operational, with substantial gains in efficiency and quality.
The most sensible approach, already investigated or adopted by some companies, calls for re-analysing company processes and gradually automating, in a modular fashion, those that would benefit most: essentially the most time-consuming, repetitive and error-prone ones.
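As a purely hypothetical sketch of what such a re-analysis could look like, the snippet below ranks an invented inventory of processes by a rough “automation benefit” score combining volume, effort, error rate and how repetitive (rule-based) the task is; all names, numbers and weights are made up for illustration.

```python
# Hypothetical process inventory:
# (cases per month, minutes per case, error rate, how repetitive/rule-based it is, 0..1)
processes = {
    "duplicate check":           (1200, 4, 0.02, 0.90),
    "literature screening":      (300, 15, 0.05, 0.70),
    "form filling / data entry": (1500, 6, 0.08, 0.95),
    "medical review":            (400, 30, 0.01, 0.20),  # expert judgement: poor automation target
}

def benefit_score(volume, minutes, error_rate, repetitiveness):
    """Crude proxy: monthly effort, weighted up for error-prone and highly repetitive tasks."""
    return volume * minutes * (1 + 10 * error_rate) * repetitiveness

ranked = sorted(processes.items(), key=lambda kv: benefit_score(*kv[1]), reverse=True)
for name, params in ranked:
    print(f"{name:28s} score={benefit_score(*params):8.0f}")
```

With these invented figures, form filling and duplicate checking come out on top while medical review ranks last, which is exactly the kind of prioritisation, repetitive and error-prone tasks first, expert judgement last, that a gradual, modular automation programme would aim for.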
You can read the full article at this link.