Validating Readability and Complexity Metrics: A New Dataset of Before-and-After Laws

If algorithms are to be the policy analysts of the future, the policy metrics they produce will require careful validation. This paper introduces a new dataset that assists in the creation and validation of automated policy metrics. It presents a corpus of laws that have been redrafted to improve readability without changing content. The dataset supports several use cases. First, it provides a benchmark of how expert legislative drafters render texts more readable. It thereby helps test whether off-the-shelf readability metrics such as Flesch-Kincaid detect readability improvements in legal texts. It can also spur the development of new readability metrics tailored to the legal domain. Second, the dataset helps train policy metrics that can distinguish policy form from policy substance. A policy text can be complex because it is poorly drafted or because it addresses complicated subject matter. Separating form and substance creates more reliable algorithmic descriptors of both.
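To make the benchmarking idea concrete, here is a minimal sketch of the Flesch-Kincaid grade level formula (0.39 × words per sentence + 11.8 × syllables per word − 15.59), the kind of off-the-shelf metric the dataset can be used to validate. The syllable counter is a rough vowel-group heuristic of my own, not part of the paper's method; production tools use dictionaries or more refined rules.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count runs of vowels, drop one for a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A before-and-after corpus lets one check whether scores like this actually fall when expert drafters simplify a law, which is the validation exercise the paper has in mind.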


This paper is one of seven published as part of the Policy Analytics Symposium.