Enter a sentence to predict the cognitive reading state associated with it. This demo combines semantic embeddings from a fine-tuned BERT model with 19 discrete linguistic features and feeds them into a Multi-Layer Perceptron (MLP), which classifies the text as either 'Normal Reading (NR)' or 'Task-Specific Reading (TSR)', the two reading conditions of the ZuCo dataset.
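
Below is a minimal sketch of what such a fusion model could look like in PyTorch. The BERT checkpoint name, MLP layer sizes, dropout rate, and the zeroed-out feature vector in the usage example are all illustrative assumptions, not the demo's actual configuration; only the overall shape (BERT embedding concatenated with 19 features, fed to an MLP with a 2-way output) follows the description above.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertMlpClassifier(nn.Module):
    """BERT [CLS] embedding concatenated with 19 linguistic features, fed to an MLP head."""

    def __init__(self, bert_name="bert-base-uncased",  # assumed checkpoint
                 n_linguistic_features=19, hidden_dim=256, n_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        fused_dim = self.bert.config.hidden_size + n_linguistic_features
        # Two-layer MLP head; sizes and dropout here are illustrative guesses.
        self.mlp = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, n_classes),  # logits for NR vs. TSR
        )

    def forward(self, input_ids, attention_mask, linguistic_features):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token as the sentence embedding
        fused = torch.cat([cls, linguistic_features], dim=-1)
        return self.mlp(fused)

# Usage sketch: classify one sentence with placeholder feature values.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertMlpClassifier().eval()
enc = tokenizer("The cat sat on the mat.", return_tensors="pt")
feats = torch.zeros(1, 19)  # stand-in for the 19 discrete linguistic features
with torch.no_grad():
    logits = model(enc["input_ids"], enc["attention_mask"], feats)
label = ["Normal Reading (NR)", "Task-Specific Reading (TSR)"][logits.argmax(-1).item()]
print(label)
```

Concatenating the sentence embedding with the handcrafted features before the MLP is one common way to fuse the two signals; how the real demo extracts and scales its 19 features is not specified here.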