Our Technology

Our background

A brief history of computational drug design (and why we are different)

Imagine an artificial intelligence capable of designing a new drug from scratch. You have surely imagined it already: AI is a fashionable topic that has generated great expectations. But although many battles have been won, the overall results remain unsatisfactory.
Unrealistic expectations about the use of computers for drug discovery are nothing new. They have persisted for at least 30 years, ever since computational methods (CMs) began promising a new way of doing pharmaceutical research.

Why?
Before AI, CMs were developed to solve the wrong models. The key-lock model, for example, is a gross simplification of the ligand-receptor interaction.
The CMs solved that problem very well (molecular docking achieves a very high level of precision), but this is of little help beyond rough screening. Chemists and biologists hypothesized a wrong model, and computer scientists gave a perfect answer to the wrong question.
The models are continuously improving, and for years we dreamed of solving the drug-discovery problem by relying on ever more complex, but always incomplete, models.
It is a never-ending story. Many other aspects have been considered, from drug repurposing to pharmacodynamics.
What moral can be drawn? That the correct answer cannot come from a wrong model.

And then came AI.

AI was not born yesterday: many of the methods that are most popular today are 50 years old. But we now have computers that allow calculations that were unthinkable only a few years ago. And the AI experts say: "Forget hypotheses. They are no longer necessary. Give all the data to our AI and it will tell us what to do." It sounds silly, but it is not that silly: AI has the potential to abstract new models that no one has yet thought of.
With the new promises also comes new funding. And with fresh money come further promises that cannot be kept, because AI has weaknesses too:

  1. Garbage in, garbage out. If the data used to train the network are not correct, the best we can get is an overfit.
  2. No one can be certain of having given the AI all, and only, the relevant data. If we provide data with no value, again the best we can get is an overfit.
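The "garbage in, garbage out" point can be illustrated with a minimal sketch (illustrative only, not SoftMining code): a model trained on purely random, uninformative features can fit its training set perfectly yet predict no better than chance on new data.

```python
import random

random.seed(0)

def make_data(n, d):
    # "Garbage" data: the features carry no information about the label.
    X = [[random.random() for _ in range(d)] for _ in range(n)]
    y = [random.randint(0, 1) for _ in range(n)]
    return X, y

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def knn_predict(X_train, y_train, x):
    # 1-nearest-neighbour: effectively memorises the training set.
    i = min(range(len(X_train)), key=lambda j: dist(X_train[j], x))
    return y_train[i]

X_train, y_train = make_data(200, 10)
X_test, y_test = make_data(200, 10)

train_acc = sum(knn_predict(X_train, y_train, x) == y
                for x, y in zip(X_train, y_train)) / len(y_train)
test_acc = sum(knn_predict(X_train, y_train, x) == y
               for x, y in zip(X_test, y_test)) / len(y_test)

print(f"train accuracy: {train_acc:.2f}")  # perfect: the model memorised noise
print(f"test accuracy:  {test_acc:.2f}")   # near 0.5: nothing real was learned
```

The training accuracy is 1.0 because the model memorises every point, while test accuracy hovers around 0.5: an overfit, exactly what worthless training data produces.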

Two aspects make us unique. First, we have developed methods and algorithms that offer a more accurate understanding of biological systems: in clarifying the mechanism of action of a molecule, we have achieved a precision often comparable to that of experimental methods. We use these data, together with experimental data, to train our AI. The second aspect concerns those experimental data: at SoftMining we use new robot-assisted synthesis technologies to produce homogeneous experimental data at high speed.

Our technology represents a unique integration of AI, computational methods, chemical synthesis and robotics.