When it comes to identifying and discovering new therapeutic modalities, it is easier said than done. The field of drug discovery, which largely lives outside of academia in the industry sector, is useful and full of potential, but it comes with multiple considerations that need to be addressed. When you think of drug discovery, the first thing that pops into mind is probably the screening of large compound libraries, meaning many thousands of compounds tested toward a certain purpose: toward a specific biological target identified as relevant to a certain disease state, toward a specific outcome or phenotype such as cell death/viability, toward global patterns of change across the genome, and the list goes on. Out of these massive screens we hope to identify compounds that can be developed as novel therapeutics and moved into the clinic. The compounds screened range from natural products such as the National Cancer Institute's natural compound diversity set (the more holistic molecules, think turmeric), which, if found to be effective, can then be chemically modified; to purely synthetic compounds with multiple analogs to see if one might have an effect; to libraries composed of drugs already in use in the clinic that could potentially be repurposed for a different, "off-label" disease.
This sounds great: with so many different compounds to screen, we should be coming out with new therapeutics all the time. And we are, for the most part, but candidate compounds frequently fail in pre-clinical workup. This can happen for a wide variety of reasons, but it starts with the screening process itself. Each researcher conducting a screen to address a certain question develops the assay platform they think is best suited to answer that question, but every assay is different, and the variables to be considered are many, making it difficult to account for them all. There is even an agency, NCATS (the National Center for Advancing Translational Sciences), that hosts annual meetings just to discuss these variables and how to better optimize the screening process.
For starters, say you are screening small cell lung cancer cells: different parameters may be needed for different cell lines, even if they are the same cell type, due to differences in cell size, shape, adherence, and so on. Then, prior to plating these cells into assay format to be screened, you have to take into consideration cell confluency, or how many cells are present in their culture plate and how close together they are. This matters because cell-to-cell contact and cross-talk change depending on how many neighboring cells are present, and this in turn changes biological functions and expression patterns. Studies have shown that cells plated for screening assays at 90% or 100% confluency do not respond to the same drug in the same way they do when plated at 50% confluency, for example: the cells plated at 50% confluency were found to respond, while the completely confluent cells were less responsive. Next, how many cells should be plated? Should they be plated to confluency? How many cells are needed to get enough yield for analysis? Should they be plated in 2-dimensional culture, or in 3-dimensional culture as spheroids to better recapitulate in vivo tumors? These are all factors that make a difference in the outcome of drug screening.
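To make the confluency question a little more concrete, here is a minimal back-of-the-envelope sketch in Python that translates a target confluency into a seeding number. The well area and single-cell footprint used here are illustrative assumptions (roughly in the range of a 96-well plate and a typical adherent cell), not values from any particular screen; real numbers vary by plate format and cell line.

```python
# Rough estimate of cells to seed per well for a target confluency.
# All numbers are illustrative assumptions: a 96-well plate well has
# roughly 0.32 cm^2 of growth area, and an adherent cell's footprint
# is on the order of 1,000-2,000 square microns depending on the line.

WELL_AREA_CM2 = 0.32          # assumed growth area of one 96-well plate well
CELL_FOOTPRINT_UM2 = 1500.0   # assumed spread area of a single adherent cell

def cells_for_confluency(target_confluency: float) -> int:
    """Cells needed so their combined footprint covers the target fraction of the well."""
    well_area_um2 = WELL_AREA_CM2 * 1e8  # 1 cm^2 = 1e8 square microns
    return round(well_area_um2 * target_confluency / CELL_FOOTPRINT_UM2)

if __name__ == "__main__":
    for conf in (0.5, 0.9, 1.0):
        print(f"{conf:.0%} confluency ~ {cells_for_confluency(conf):,} cells/well")
```

Under these assumptions, 50% confluency works out to roughly 10,000 cells per well while full confluency is over 21,000, which is exactly why the seeding decision is not a trivial one.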
Researchers also need to decide what concentration of compound to screen at. Physiologically relevant concentrations, or those more likely to be tolerated in the human body, should be sub-micromolar to avoid off-target and toxic effects. Most screens are done at micromolar concentrations, though, with the thought that if an effect is seen, the compound can then be optimized for sub-micromolar use. But different compounds might be effective at different concentrations, so this is one limitation that is hard to overcome in high-throughput (thousands of compounds or more) screening efforts. If you pick a concentration, or maybe even a few, and go with it, you will get "hit" compounds, but there is always the risk of glossing over compounds that might have been effective or held promise had you screened the right concentration for them. This is a difficult issue to address, as generating something like IC50 curves for every compound would be highly time- and reagent-consuming, and just not practical. Similarly, the duration of treatment chosen for the screen can let you identify some hits but miss other would-be hits: if researchers run their screen at, say, 24 hours and 120 hours (five days), they might find compounds that work at those time points but miss compounds that act as quickly as 15 minutes or take longer, say a week or more. Finally, how researchers opt to analyze the resulting data, and which statistical cutoffs and analyses they use, can make a difference in which compounds or drug candidates get moved forward for further study.
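As a small illustration of how those statistical cutoffs shape a hit list, the Python sketch below normalizes raw compound readings to on-plate positive and negative controls, checks assay quality with the widely used Z'-factor, and then calls hits above a threshold. The data, the compound names, and the 50% cutoff are all invented for the example; this is one common analysis approach, not the only one used in screening.

```python
# A minimal sketch of one common hit-calling approach: normalize each
# compound's signal to on-plate controls, check assay quality with the
# Z'-factor, then apply a hit threshold. Data and cutoff are invented.
import statistics

neg_controls = [0.98, 1.02, 0.95, 1.05, 1.00]  # e.g., DMSO-only wells (~100% viability)
pos_controls = [0.05, 0.08, 0.04, 0.07, 0.06]  # e.g., wells with a known cytotoxic agent
compounds = {"cmpd_A": 0.30, "cmpd_B": 0.92, "cmpd_C": 0.55}  # hypothetical readings

def z_prime(pos, neg):
    """Z'-factor: above ~0.5 is usually considered a robust screening assay."""
    return 1 - 3 * (statistics.stdev(pos) + statistics.stdev(neg)) / abs(
        statistics.mean(pos) - statistics.mean(neg))

def percent_inhibition(signal, pos_mean, neg_mean):
    """0% = behaves like the negative control, 100% = like the positive control."""
    return 100 * (neg_mean - signal) / (neg_mean - pos_mean)

pos_mean, neg_mean = statistics.mean(pos_controls), statistics.mean(neg_controls)
print(f"Z'-factor: {z_prime(pos_controls, neg_controls):.2f}")

HIT_CUTOFF = 50.0  # assumed threshold; moving it changes which compounds advance
for name, signal in compounds.items():
    inhib = percent_inhibition(signal, pos_mean, neg_mean)
    print(f"{name}: {inhib:.0f}% inhibition -> {'HIT' if inhib >= HIT_CUTOFF else 'miss'}")
```

Note that with these made-up numbers, cmpd_C lands at roughly 48% inhibition, just under the cutoff: nudging the threshold a few points in either direction would change whether it gets moved forward, which is exactly the point about analysis choices made above.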
For these reasons, while screening modalities are able to identify promising molecules, many others are likely overlooked and may never be recognized. Collaborative groups of interdisciplinary scientists are still working to overcome this, especially as technologies and our capabilities continue to expand. See below for a video on high-throughput screening for drug discovery.
Sources: National Center for Advancing Translational Sciences, Pixabay, Youtube, Retisoft Inc.