Meta-learning for real-world class incremental learning: a transformer-based approach
Abstract Modern state-of-the-art (SoTA) deep learning (DL) models for natural language processing (NLP) have hundreds of millions of parameters, making them extremely complex. Large datasets are required to train these models, and while pretraining has reduced this requirement, human-labelled datasets are still necessary for fine-tuning.