Schedule

The schedule is now final!

ALL TIMES ARE IN UTC!

CLICK ON THE TIME TO FIND OUT WHEN THE EVENT TAKES PLACE IN YOUR TIME ZONE!

11:00-11:05 Welcome & opening remarks by the organizers

11:05-11:35 Opening: Protein representation learning: Beyond borrowing from Natural Language Processing and Computer Vision - Kevin Yang [Recorded]

Directly porting model architectures and tasks, such as word2vec, BERT, SimCLR, and variational autoencoders, from natural language processing and computer vision has driven early progress in learning protein representations from unlabeled sequences. However, these models and tasks do not account for important differences between protein sequences and language or image datasets. For example, protein sequences are generated via evolution, sampling is biased towards proteins from humans and model organisms, and there is often side information, such as 3-dimensional structures. I will discuss some important advances in protein representation learning that account for or exploit these differences.

11:35-11:50 Single Layers of Attention Suffice to Predict Protein Contacts - Neil Thomas [Live]

The established approach to unsupervised protein contact prediction estimates coevolving positions using undirected graphical models. This approach trains a Potts model on a Multiple Sequence Alignment. Increasingly large Transformers are being pretrained on unlabeled, unaligned protein sequence databases but have demonstrated mixed results for downstream tasks, including contact prediction. We argue that attention is a principled model of protein interactions, grounded in real properties of protein family data. We introduce an energy-based attention layer, factored attention, and show that it achieves comparable performance to Potts models while sharing parameters both within and across families. We contrast factored attention with the Transformer to indicate that the Transformer leverages hierarchical signal in protein family databases not captured by our single-layer models. This raises the exciting possibility for the development of powerful structured models of protein family databases.
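
As a rough illustration of the mechanism, the sketch below (PyTorch) builds a single factored attention layer: positional queries and keys give an L x L attention map per head, a per-head amino-acid matrix plays the role of the Potts couplings, and contacts are read off the coupling norms with APC correction. The shapes, random initialization, and contact-extraction details are assumptions for illustration, not the authors' implementation; in practice the parameters would be fit on a family's MSA.

```python
# Minimal factored-attention sketch; all shapes and names are illustrative assumptions.
import torch
import torch.nn.functional as F

L, A, H, d = 128, 21, 32, 16   # alignment length, alphabet size, heads, head dim

# Learnable parameters (would be fit, e.g. by pseudolikelihood, on an MSA).
Q = torch.randn(H, L, d) * 0.1   # per-position queries
K = torch.randn(H, L, d) * 0.1   # per-position keys
V = torch.randn(H, A, A) * 0.1   # per-head amino-acid coupling matrix

def coupling_tensor(Q, K, V):
    """Potts-style couplings W[i, j, a, b] = sum_h attn_h[i, j] * V_h[a, b]."""
    attn = F.softmax(torch.einsum("hid,hjd->hij", Q, K) / d ** 0.5, dim=-1)  # (H, L, L)
    return torch.einsum("hij,hab->ijab", attn, V)

def contact_map(W):
    """Frobenius norm of each i-j coupling block, symmetrized, with APC correction."""
    S = W.pow(2).sum(dim=(-1, -2)).sqrt()          # (L, L)
    S = 0.5 * (S + S.T)
    apc = S.sum(0, keepdim=True) * S.sum(1, keepdim=True) / S.sum()
    return S - apc                                  # higher score = predicted contact

contacts = contact_map(coupling_tensor(Q, K, V))
print(contacts.shape)  # torch.Size([128, 128])
```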

11:50-12:05 Lightning talks

11:50-11:55 - Evotuning protocols for Transformer-based variant effect prediction on multi-domain proteins - Hideki Yamaguchi [Live]

Accurate prediction of variant effects has broad impacts on protein engineering. Recent machine learning approaches toward this end are based on representation learning, often using large-scale, diverse datasets. However, it is still unclear how we can effectively learn the intrinsic evolutionary properties of an engineering target protein, particularly when the protein is composed of multiple domains. Additionally, no optimal protocols have been established for incorporating such properties into Transformer-based variant effect predictors. In response, we propose evolutionary fine-tuning, or “evotuning”, protocols, considering various combinations of homology search, fine-tuning, and sequence embedding strategies, without the need for multiple sequence alignment. Exhaustive evaluations on diverse proteins indicate that the models obtained by our protocols achieve significantly better performance than previous methods. Visualizations of attention maps suggest that structural information can be incorporated by evotuning without direct supervision, possibly leading to better prediction accuracy.

11:55 - 12:00 - ProteinBERT: A universal deep-learning model of protein sequence and function - Dan Ofer [Live]

Self-supervised deep language modeling has shown unprecedented success across natural language tasks, and has recently been repurposed to biological sequences. However, existing models and pretraining methods are designed and optimized for text analysis. We introduce ProteinBERT, a deep language model specifically designed for proteins. Our pretraining scheme consists of masked language modeling combined with a novel task of Gene Ontology (GO) annotation prediction. We introduce novel architectural elements that make the model highly efficient and flexible to very large sequence lengths. The architecture of ProteinBERT consists of both local and global representations, allowing end-to-end processing of both sequence-level (local) and whole-protein (global) inputs and outputs. ProteinBERT obtains state-of-the-art performance on multiple benchmarks covering diverse protein properties (including protein structure, post-translational modifications and biophysical attributes), despite using a far smaller model than competing deep-learning methods. Overall, ProteinBERT provides an efficient framework for rapidly training protein predictors, even with limited labeled data. Code and pretrained model weights are available at https://github.com/nadavbra/protein_bert
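
As a hedged illustration of the pretraining objective described above (masked language modeling plus GO annotation prediction over local and global representations), the sketch below combines a per-residue masked-token loss with a multi-label GO loss. The encoder is omitted, and all dimensions and head definitions are placeholders, not the ProteinBERT code.

```python
# Toy joint pretraining loss: masked-token CE on local outputs + multi-label BCE on GO terms.
import torch
import torch.nn as nn

class ToyProteinBERTLoss(nn.Module):
    def __init__(self, vocab_size=26, n_go_terms=8943, d_local=128, d_global=512):
        super().__init__()
        self.token_head = nn.Linear(d_local, vocab_size)  # local: reconstruct masked residues
        self.go_head = nn.Linear(d_global, n_go_terms)    # global: predict GO annotations
        self.mlm_loss = nn.CrossEntropyLoss(ignore_index=-100)
        self.go_loss = nn.BCEWithLogitsLoss()

    def forward(self, local_repr, global_repr, masked_targets, go_targets):
        # local_repr: (batch, length, d_local), global_repr: (batch, d_global)
        # masked_targets: (batch, length), -100 at unmasked positions; go_targets: multi-hot
        token_logits = self.token_head(local_repr)
        mlm = self.mlm_loss(token_logits.transpose(1, 2), masked_targets)
        go = self.go_loss(self.go_head(global_repr), go_targets)
        return mlm + go

# toy usage with random tensors standing in for the encoder outputs
targets = torch.full((2, 100), -100, dtype=torch.long)
targets[:, :10] = 3  # pretend the first 10 positions were masked
loss = ToyProteinBERTLoss()(torch.randn(2, 100, 128), torch.randn(2, 512),
                            targets, torch.zeros(2, 8943))
```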

12:00 - 12:05 - Graph attention network based representation learning for cancer drug response prediction and interpretation - Dionizije Fa [Recorded]

We present a state-of-the-art multimodal deep learning model for cancer drug response prediction based on pharmacogenomic data. We featurize cell lines as protein–protein interaction graphs. Graph attention networks then allow us to identify plausible biological interactions in these graphs by examining the attention coefficients.
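
The sketch below (plain PyTorch, with illustrative shapes and a deliberately naive per-node softmax) shows the kind of graph attention layer referred to above, kept small so the per-edge attention coefficients can be inspected; it is not the authors' implementation.

```python
# Minimal single graph attention layer over a PPI graph; alpha holds per-edge attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # edge scoring vector

    def forward(self, x, edge_index):
        # x: (n_proteins, in_dim) node features; edge_index: (2, n_edges) source/target pairs
        h = self.W(x)
        src, dst = edge_index
        e = F.leaky_relu(self.a(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1), 0.2)
        alpha = torch.zeros_like(e)
        for node in dst.unique():                 # softmax over incoming edges of each node
            mask = dst == node
            alpha[mask] = F.softmax(e[mask], dim=0)
        out = torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
        return out, alpha                         # alpha is the interpretable attention

# toy usage: 4 proteins, 4 directed PPI edges
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 1, 3, 1]])
out, alpha = TinyGATLayer(8, 16)(x, edge_index)
print(alpha)  # one attention coefficient per edge
```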

12:05-12:20 Unsupervised language modeling at the scale of evolution - Alex Rives

Growth in the number of protein sequences in public databases has followed an exponential trend over decades, creating a deep view into the breadth and diversity of proteins across life. Modeling sequences at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. Our goal is to develop general purpose models that can read and write biology in its native language. I'll discuss our work to scale language models to evolution, and to understand what they learn about protein structure and function. I’ll discuss how their internal representations can be used to produce features for a variety of tasks, and the use of the models generatively. I’ll also talk about innovations in unsupervised modeling that take advantage of the structure of sequence space.

12:20-12:40 Lunch Break

12:40-12:55 Unraveling new biological insights from single-cell proteomic data using DEPICTION - Maria Rodriguez Martinez [Live]

The recent availability of large amounts of data generated by large international consortia has made possible the application of deep learning approaches to a vast set of problems in computational biology. However, high accuracy often comes at the price of loss of transparency, i.e. many of these models are built as black-boxes that fail to provide new biological insights. To overcome this challenge, I will present DEPICTION, a new open-source toolbox for interpretability in deep learning, which makes readily available some of the most commonly used methods for interpretability.


I will illustrate DEPICTION's capabilities using a published massive single-cell dataset of 26 million cells from breast cancer biopsies. Although each biopsy is labelled as tumour or normal, we observe mixed phenotypes, with both cancer and normal cells present under both labels. This mixture makes it difficult to identify latent features that correlate with each phenotype. I will show how DEPICTION allows us to disentangle the directions in the latent space associated with cancerous and healthy cellular states, and to train a surrogate model that achieves good performance while being transparent about the features used for classification. In conclusion, DEPICTION opens the door to deeper analysis of highly complex biological datasets and to learning meaningful representations in the presence of mixed or noisy phenotypes.

12:55-13:15 Lightning talks

12:55-13:00 - HydrAMP: a deep generative model for antimicrobial peptide discovery - Paulina Szymczak [Live]

The development of resistance to conventional antibiotics in pathogenic bacteria poses a global health hazard. Antimicrobial peptides (AMPs) are an emerging group of compounds with the potential to become the new generation of antibiotics. Deep learning methods are widely used by wet-laboratory researchers to screen for the most promising candidates. We propose HydrAMP, a generative model based on a semi-supervised variational autoencoder that can generate new AMPs and perform analogue discovery. Novel features of our approach include non-iterative training, parameter-regulated model creativity, and improvement of existing AMPs. We introduce multiple refinements to latent space modelling that allow us to sample novel AMPs despite the data scarcity. The peptides generated by HydrAMP are similar to known AMPs in terms of physicochemical properties. We have experimentally obtained and verified a new, more active analogue of Pexiganan, proving that HydrAMP is able to find potent analogues of existing peptides. The learnt representation enables fast and efficient discovery of peptides with desired biological activity.
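
As a rough sketch of the analogue-discovery step described above, the function below perturbs the latent code of a known peptide with a temperature playing the role of the "creativity" parameter and decodes conditioned on the desired activity label. The encoder/decoder callables, shapes, and conditioning scheme are assumptions, not the HydrAMP code.

```python
# Hedged sketch of conditional-VAE analogue sampling around a known peptide.
import torch

def sample_analogues(encoder, decoder, peptide_onehot, n_samples=16, creativity=1.0):
    # encoder: peptide -> (mu, logvar); decoder: (z, condition) -> residue logits
    mu, logvar = encoder(peptide_onehot)
    std = torch.exp(0.5 * logvar)
    z = mu + creativity * std * torch.randn(n_samples, mu.shape[-1])  # perturb the latent code
    active = torch.ones(n_samples, 1)             # condition on the "antimicrobial" label
    logits = decoder(z, active)                   # (n_samples, max_len, 20)
    return logits.argmax(dim=-1)                  # candidate analogue sequences
```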

13:00-13:05 - Random Walk-based Matrix Factorization of a Multi-layer Network for Protein Function Prediction - Surabhi Jagtap [Recorded]

Cellular systems of organisms are composed of multiple interacting entities that control cellular processes through tightly regulated molecular networks. In recent years, the advent of high-throughput experimental methods has led to a rapid increase in large-scale molecular and functional interaction networks, such as gene co-expression, protein–protein interaction (PPI), genetic interaction, and metabolic networks. These networks are rich sources of information that could be used to infer the functional annotations of genes or proteins. Extracting relevant biological information from their topologies is essential for understanding the functioning of the cell and its building blocks (proteins). It is therefore necessary to obtain an informative representation of the proteins and their proximity that is not fully captured by features extracted directly from single input networks. Here, we propose BraneMF, a random walk-based matrix factorization of a multi-layer network for protein function prediction.
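
The sketch below (NumPy) illustrates the general recipe: a random-walk/PPMI matrix per network layer, a simple aggregation across layers, and a truncated SVD to obtain protein embeddings. The PPMI construction and the mean aggregation are assumptions for illustration, not the exact BraneMF formulation.

```python
# Hedged sketch: per-layer random-walk PPMI, mean aggregation, SVD embeddings.
import numpy as np

def rw_ppmi(adj, walk_len=3):
    """Positive PMI of a few random-walk steps on one network layer."""
    deg = adj.sum(1, keepdims=True).clip(min=1)
    P = adj / deg                                        # one-step transition matrix
    M = sum(np.linalg.matrix_power(P, k) for k in range(1, walk_len + 1)) / walk_len
    joint = M / M.sum()
    marg = joint.sum(1, keepdims=True) @ joint.sum(0, keepdims=True)
    return np.maximum(np.log((joint + 1e-12) / (marg + 1e-12)), 0)

def multilayer_embeddings(layers, dim=64):
    """Average the per-layer random-walk matrices and factorize with truncated SVD."""
    M = np.mean([rw_ppmi(a) for a in layers], axis=0)
    U, S, _ = np.linalg.svd(M)
    return U[:, :dim] * np.sqrt(S[:dim])

# toy usage: two layers (e.g. PPI and co-expression) over 100 proteins
layers = [(np.random.rand(100, 100) > 0.9).astype(float) for _ in range(2)]
emb = multilayer_embeddings(layers, dim=16)
print(emb.shape)  # (100, 16)
```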

13:05-13:10 - Light Attention Predicts Protein Location from the Language of Life - Hannes Stärk [Live]

Although knowing where a protein functions in a cell is important for characterizing biological processes, this information remains unavailable for most known proteins. Machine learning narrows the gap through predictions from expertly chosen input features leveraging evolutionary information that is resource-expensive to generate. We show that embeddings from protein language models enable competitive localization predictions without relying on evolutionary information. Our lightweight deep neural network architecture uses a softmax-weighted aggregation mechanism with linear complexity in sequence length, referred to as light attention (LA). The method significantly outperformed the state of the art for ten localization classes (Q10) by about eight percentage points. The novel models are available as a web service and as a stand-alone application at embed.protein.properties.
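
A minimal sketch of the aggregation mechanism (PyTorch): convolutions produce per-position values and scores, the scores are softmax-normalized over the sequence length (hence linear cost), and the weighted sum is classified into localization classes. The layer sizes and the omission of the additional pooling paths are simplifications, not the published architecture.

```python
# Hedged sketch of softmax-weighted ("light attention") aggregation over a sequence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightAttentionSketch(nn.Module):
    def __init__(self, d_embed=1024, d_hidden=32, n_classes=10):
        super().__init__()
        self.values = nn.Conv1d(d_embed, d_hidden, kernel_size=9, padding=4)
        self.scores = nn.Conv1d(d_embed, d_hidden, kernel_size=9, padding=4)
        self.classifier = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        # x: (batch, d_embed, length) per-residue embeddings from a protein language model
        v = self.values(x)                          # (batch, d_hidden, length)
        a = F.softmax(self.scores(x), dim=-1)       # attention over sequence positions
        pooled = (a * v).sum(dim=-1)                # (batch, d_hidden), linear in length
        return self.classifier(pooled)              # logits over localization classes

logits = LightAttentionSketch()(torch.randn(2, 1024, 150))
print(logits.shape)  # torch.Size([2, 10])
```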

13:10-13:15 - Guided Generative Protein Design using Regularized Transformers - Egbert Castro [Live]

The development of powerful natural language models has increased our ability to learn meaningful representations of protein sequences. In addition, advances in high-throughput mutagenesis, directed evolution, and next-generation sequencing have allowed for the accumulation of large amounts of labelled fitness data. Leveraging these two trends, we introduce Regularized Latent Space Optimization (ReLSO), a deep transformer-based autoencoder that is trained to jointly generate sequences and predict fitness. Using ReLSO, we explicitly model the underlying sequence-function landscape of large labeled datasets and optimize within the latent space using gradient-based methods. Through its regularized prediction heads, ReLSO provides a powerful protein sequence encoder and a novel approach for efficient fitness landscape traversal.
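
As a hedged illustration of gradient-based traversal in a jointly trained latent space, the sketch below ascends the fitness head's prediction starting from a seed sequence's encoding and decodes the result; the encoder, decoder, and fitness head are placeholders, not the ReLSO implementation.

```python
# Hedged sketch of latent-space optimization against a learned fitness predictor.
import torch

def optimize_in_latent_space(encoder, decoder, fitness_head, seed_onehot,
                             steps=100, lr=0.05):
    z = encoder(seed_onehot).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -fitness_head(z).sum()   # gradient ascent on predicted fitness
        loss.backward()
        opt.step()
    return decoder(z).argmax(dim=-1)    # residue indices of the optimized sequence
```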

13:15-13:30 Meet our sponsors

13:30-13:45 ChemBERTa: Self-supervised pretraining for molecular property prediction - Bharath Ramsundar [Recorded]

The design of a robust transfer learning method for molecules has been a longstanding challenge. In this work, we explore the use of NLP-style pretraining for learning a "chemical language" model on a large corpus of SMILES strings. Our results suggest that it is possible to learn meaningful chemical context in an unsupervised fashion; together with recent results from others on language modeling for DNA, this further suggests that NLP methods provide a robust basis for building understanding of biomolecules.

13:45-14:00 Short DNA sequence embeddings uncover metagenome function - Yana Bromberg

Microbes dominate life on Earth. Understanding the environment-specific microbial molecular functionality is, therefore, a critical challenge for the analysis of microbiome behavior and response to stimuli such as dietary or climate changes. We developed a model of the bacterial language of life, which allows embedding metagenomic read data for multiple downstream tasks and analyses. Embedding distances, for example, correspond to the differences in environmental niches from which metagenomes were sampled. Furthermore, models using read embeddings can annotate metagenome molecular functionality, highlighting genes that likely carry out known functions via novel sequence. Our LookingGlass language model is thus a promising starting point for more in-depth exploration of the prokaryotic living space.

14:00-14:20 Coffee break

14:20-14:35 Decoding language of life written in protein sequences - Burkhard Rost [Live]

Over the last two years, it has become possible to learn the language of life written in proteins by mimicking the tools developed to understand natural language (NLP), most importantly transformers. The information extracted by such protein language models (pLMs), referred to as embeddings, is transferred to serve as input for supervised protein prediction from experimental annotations. For the prediction of protein secondary structure in 1D, inter-residue distances in 2D, and structure in 3D, as well as for sub-cellular location, such methods now at least reach the top methods without using any evolutionary information from multiple sequence alignments (MSAs), thereby substantially reducing the cost of every future prediction.
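
A small sketch of the transfer setup (PyTorch): per-residue embeddings from a pretrained pLM are fed, without any MSA, to a lightweight supervised head, here a toy 3-state secondary-structure predictor. The embedder is represented by a random tensor and the head sizes are assumptions, not any published model.

```python
# Hedged sketch: pLM embeddings in, per-residue secondary-structure logits out.
import torch
import torch.nn as nn

class SecondaryStructureHead(nn.Module):
    def __init__(self, d_embed=1024, n_states=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(d_embed, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, n_states, kernel_size=7, padding=3),
        )

    def forward(self, embeddings):
        # embeddings: (batch, length, d_embed) from a frozen protein language model
        return self.net(embeddings.transpose(1, 2)).transpose(1, 2)  # (batch, length, 3)

# toy usage with a random tensor standing in for the pLM embedder's output
fake_embeddings = torch.randn(1, 120, 1024)
ss_logits = SecondaryStructureHead()(fake_embeddings)
print(ss_logits.shape)  # torch.Size([1, 120, 3])
```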

14:35-14:45 Lightning talks

14:35-14:40 - Efficient Design of Optimized AAV Capsids using Multi-property Machine Learning Models Trained across Cells, Organs and Species - Farhan Damani [Live]

While next-generation high-throughput assays enable us to learn how capsid sequence changes affect capsid functionality, measuring and optimizing capsid properties in the most therapeutically relevant models, such as non-human primates (NHPs), remains challenging. The rate of transduction in target organs is lower than ideal, and most of the sequence space is non-functional. To overcome these challenges, we investigated to what extent multi-property machine learning models (MPMs) can improve the efficiency of AAV capsid design for high-performing capsids. We apply our method to a previously designed library containing 156,858 sequence variants derived from a natural AAV capsid serotype and measure their properties as delivery vectors. MPMs provide a coherent framework for connecting information from experiments across cell lines, organs, and species to the most relevant outcomes in NHP studies, thereby reducing the high resource and ethical burdens of NHP experimentation. Additionally, MPMs help overcome data sparsity in traits that are hard to measure, improving model accuracy and providing a more reliable interpretation of experimental results. With further refinement, MPMs will enable the design of highly optimized AAV capsids that open new frontiers in delivery, toward realizing the full potential of gene therapy.

14:40-14:45 - Multimodal data visualization and denoising with integrated diffusion - Manik Kuchroo [Live]

We propose a method called integrated diffusion for combining multimodal data, gathered via different sensors on the same system, into an integrated data diffusion operator. As real-world data suffer from both local and global noise, we introduce mechanisms to optimally calculate a diffusion operator that reflects the combined information in the data by maintaining the low-frequency eigenvectors of each modality both globally and locally. We show the utility of this integrated operator in denoising and visualizing multimodal toy data as well as multi-omic data generated from blood cells, measuring both gene expression and chromatin accessibility. Our approach better visualizes the geometry of the integrated data and captures known cross-modality associations. More generally, integrated diffusion is broadly applicable to multimodal datasets generated by noisy sensors across a variety of fields.
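
The sketch below (NumPy) gives a simplified version of the idea: one diffusion operator per modality, each raised to a small power to damp modality-specific noise, then composed into a joint operator used to denoise the data. The Gaussian kernel, fixed powers, and composition order are assumptions, not the exact integrated diffusion algorithm.

```python
# Hedged sketch of composing per-modality diffusion operators into one joint operator.
import numpy as np

def diffusion_operator(features, sigma=1.0):
    """Row-normalized Gaussian affinity matrix over cells for one modality."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return K / K.sum(1, keepdims=True)

def integrated_diffusion(modalities, powers=(2, 2)):
    """Compose powered per-modality diffusion operators into one joint operator."""
    joint = np.eye(len(modalities[0]))
    for feats, t in zip(modalities, powers):
        joint = joint @ np.linalg.matrix_power(diffusion_operator(feats), t)
    return joint

# toy usage: 200 cells measured with two modalities (e.g. RNA and chromatin accessibility)
rna, atac = np.random.randn(200, 50), np.random.randn(200, 30)
P = integrated_diffusion([rna, atac])
denoised_rna = P @ rna   # diffusing the data through the joint operator denoises it
print(P.shape)           # (200, 200)
```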

14:45-15:15 Closing: Geometric and Topological Approaches to Representation Learning in Biomedical Data - Smita Krishnaswamy [Live]

In this talk, I will show how data diffusion geometry, topology and deep learning can be combined to obtain useful representations and abstractions of high-dimensional data that lead to insights about the underlying systems. Next, I will show how to learn dynamics from static snapshot data by using a manifold-regularized neural ODE-based optimal transport (TrajectoryNet) in order to study cancer progression. Finally, I will cover a novel approach that combines diffusion geometry with topology to extract multi-granular features from the data (Diffusion Condensation and Multiscale PHATE) to assist in differential and predictive analysis.

15:15-15:20 Closing remarks by the organizers

Asynchronous Poster Session

Outside of the main program

In order to accommodate speakers and attendees in different time zones, we propose two optional sessions for speakers and attendees to come together and have a conversation.

We will meet on Zoom!

https://tum-conf.zoom.us/j/68175242928 (passcode: 4444)

1. Cumulative Q&A session

16:00 / Click here to see it in your timezone!

These speakers will be present: Smita, Maria, Kevin, Hannes, Paulina, Surabhi, Dionizije, Burkhard, Farhan, Dan, Maria, Alex, Egbert, Yana

2. Cumulative Q&A session

21:00 / Click here to see it in your timezone!

These speakers will be present: Smita, Maria, Kevin, Bharath, Neil, Dionizije, Burkhard, Farhan, Maria, Manik, Alex, Egbert