Rules against the Machine: Building Bridges from Text to Metadata

José Calvo Tello, University of Würzburg, Germany


Digital literary studies are advancing in their research and require more specific metadata about literary phenomena: narrator (Hoover 2004), characters (Kastorp et al. 2015), place and period, etc. This metadata can be used to explain results in tasks such as authorship attribution or genre detection, or to evaluate digital methods (Calvo Tello 2017). What is the most efficient way to start annotating this information in corpora of thousands of texts in languages, genres and historical periods for which many NLP tools are not trained? In this proposal, the aim is to identify specific literary metadata about entire texts with methods that are either language-independent or easily adaptable by humanists.

Two Ways from Text to Metadata

The two approaches applied here to classify unlabeled samples are rule-based classification and supervised machine learning. In rule-based classification (Witten et al. 2011), domain experts define formalised rules that correctly classify the samples. For example, a rule based on a single token per class can predict whether a text is written in third person (83% of the corpus) or first person. The tokens for the two values are the Spanish words dije (‘I said’) and dijo (‘he said’), and the rule:

  1. if dijo appears 90% more than dije, the novel is written in third person
  2. if dijo appears less, it is written in first person
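A minimal sketch of this rule in Python (reading the threshold as "dijo occurs at least 90% more often than dije" is one possible interpretation of the rule above):

```python
def predict_narrator(tokens, ratio=0.9):
    """Rule-based narrator classifier: if 'dijo' ('he said') appears
    at least 90% more often than 'dije' ('I said'), predict third
    person; otherwise first person."""
    dijo = tokens.count("dijo")
    dije = tokens.count("dije")
    if dijo > (1 + ratio) * dije:
        return "third"
    return "first"
```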

The results of applying this rule can be presented as a confusion matrix:

Fig 1. Confusion Matrix of rule-based results about narrator
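Such a confusion matrix can be tallied directly from the gold and predicted labels; a minimal standard-library sketch (the labels below are toy examples, not the corpus results):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows are true classes, columns are predicted classes."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

gold = ["third", "third", "third", "first"]
pred = ["third", "first", "third", "first"]
matrix = confusion_matrix(gold, pred, ["third", "first"])
```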

For supervised methods (Müller and Guido 2016; VanderPlas 2016), labeled samples are needed to train and evaluate the method. In the following table, different classifiers and document representations achieve different accuracy scores:


Fig 2. Accuracy (F1-score) for narrator
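A typical setup of this kind, sketched with scikit-learn (the texts and labels here are invented toy fragments, not the corpus data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy labeled samples: first- vs. third-person fragments
texts = ["dije que no", "yo dije adios", "dijo que si", "ella dijo hola"]
labels = ["first", "first", "third", "third"]

# Document model (tf-idf) plus classifier, evaluated by cross-validation
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
scores = cross_val_score(pipeline, texts, labels, cv=2, scoring="f1_macro")
```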

Corpus and Metadata

The data is part of the Corpus of Spanish Novels of the Silver Age (1880-1939) (used in Calvo Tello et al. 2017), comprising 350 novels in XML-TEI by 58 authors. Each text has been annotated manually with metadata, and each annotation has been assigned a degree of certainty. The 262 texts with either high or medium certainty have been used to create a gold standard with the following classes:

  1. protagonist.gender
  2. protagonist.age
  3. protagonist.socLevel
  4. setting.type
  5. setting.continent
  6. narrator
  7. representation
  8. time.period
  9. end
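The certainty filter described above could be sketched as follows (the field names and records are hypothetical, not the actual TEI encoding):

```python
# Hypothetical metadata records, one per novel
metadata = [
    {"title": "Novel A", "narrator": "third", "certainty": "high"},
    {"title": "Novel B", "narrator": "first", "certainty": "medium"},
    {"title": "Novel C", "narrator": "third", "certainty": "low"},
]

# Keep only annotations with high or medium certainty for the gold standard
gold_standard = [m for m in metadata if m["certainty"] in {"high", "medium"}]
```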

Modelling and Methods

The scripts have been written in Python (available on GitHub). The features have been represented as different document models (Kestemont et al. 2016):

  • raw frequencies
  • relative frequencies
  • tf-idf
  • z-scores
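The four document models can all be derived from a documents-by-words matrix of raw counts; a sketch with NumPy (assuming rows are documents and columns are the MFW):

```python
import numpy as np

def document_models(raw):
    """Return relative frequencies, tf-idf and z-scores from raw counts."""
    raw = np.asarray(raw, dtype=float)
    # relative frequencies: counts normalised by document length
    relative = raw / raw.sum(axis=1, keepdims=True)
    # tf-idf: term frequency weighted by inverse document frequency
    df = (raw > 0).sum(axis=0)
    tfidf = relative * np.log(len(raw) / df)
    # z-scores: standardise each word column across documents
    z = (relative - relative.mean(axis=0)) / relative.std(axis=0)
    return relative, tfidf, z

relative, tfidf, z = document_models([[2, 0], [1, 3]])
```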

Different classification algorithms (10-fold cross-validation) and different numbers of Most Frequent Words (MFW) have been evaluated. For each class, a single token was used to represent each class value, and a ratio was assigned for the default class value (see the GitHub repository for the rules). Both approaches were compared to a “most populated class” baseline, which is quite high in many cases.


The results of both approaches are as follows:

Class | F1 baseline | F1 Rule | F1 Cross Mean | F1 Cross Std | Algorithm | Model | MFW | Winner

Fig 3. Results

In many cases the baselines are higher than the results of both approaches. The rule outperformed the baseline for the name of the setting, with very good results. In two cases (narrator and setting’s type), machine learning is the most successful approach and its F1 is statistically higher than the baseline (one-sample t-test, ɑ = 5%). The algorithms Support Vector Machines, Logistic Regression and Random Forest were the most successful, while tf-idf and especially z-scores got the best results; the latter is a data representation “highly uncommon in other applications” outside stylometry (Kestemont et al. 2016).
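The significance check reported above can be reproduced with a one-sample t-test over the cross-validation folds; a standard-library sketch (the fold scores and baseline below are invented for illustration):

```python
import math

def one_sample_t(scores, baseline):
    """t statistic for H0: the mean fold score equals the baseline."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return (mean - baseline) / math.sqrt(var / n)

# Hypothetical 10-fold F1 scores vs. a majority-class baseline of 0.75
folds = [0.82, 0.79, 0.85, 0.80, 0.83, 0.78, 0.84, 0.81, 0.80, 0.86]
t = one_sample_t(folds, 0.75)
# one-sided critical value for df = 9, alpha = 0.05 is about 1.833
significant = t > 1.833
```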


In this proposal I have used simple rules and simple features to detect relatively complex literary metadata, in many cases against high baselines. While machine learning showed a statistically significant improvement in detection for two classes (type of setting and narrator), rules worked better for the name of the setting. This is a promising starting point for continued research in order to annotate the rest of the corpus.

Appendix A

  1. Calvo Tello, J. (2017). What does Delta see inside the Author?: Evaluating Stylometric Clusters with Literary Metadata. III Congreso de La Sociedad Internacional Humanidades Digitales Hispánicas: Sociedades, Políticas, Saberes. Málaga: HDH, pp. 153–61.
  2. Calvo Tello, J., Schlör, D., Henny-Krahmer, U. and Schöch, C. (2017). Neutralising the Authorial Signal in Delta by Penalization: Stylometric Clustering of Genre in Spanish Novels. Montréal: ADHO, pp. 181–83.
  3. Hoover, D. L. (2004). Testing Burrows’s Delta. Literary and Linguistic Computing, 19(4): 453–75.
  4. Kastorp, F., Kestemont, M., Schöch, C. and Bosch, A. Van den (2015). The Love Equation: Computational Modeling of Romantic Relationships in French Classical Drama. Sixth International Workshop on Computational Models of Narrative. Atlanta, GA, USA.
  5. Kestemont, M., Stover, J., Koppel, M., Karsdorp, F. and Daelemans, W. (2016). Authenticating the writings of Julius Caesar. Expert Systems with Applications, 63: 86–96.
  6. Müller, A. C. and Guido, S. (2016). Introduction to Machine Learning with Python: A Guide for Data Scientist. Beijing: O’Reilly.
  7. VanderPlas, J. (2016). Python Data Science Handbook: Essential Tools for Working with Data. First edition. Beijing Boston Farnham: O’Reilly.
  8. Witten, I., Frank, E. and Hall, M. (2011). Data Mining: Practical Machine Learning Tools and Techniques. 3rd edition. San Francisco: Morgan Kaufmann.

