Viewpoint

Privacy by the Numbers: A Deep Dive into the Structure of Privacy Policies

As part of our work for the Common Sense District Privacy Evaluation Initiative, we spend a lot of time reading through the text of privacy policies and terms of service of educational software. However, we are also looking at the mechanics of how these policies are articulated and delivered. Over time, as we evaluate more policies, we will be looking for possible patterns or correlations between technical and stylistic details and the contents of policies.

To be clear, we do not think we will find any direct correlation between policy structures and whether terms are good or bad (although if we could see that predictably, that would make everyone's life a whole lot easier). However, even based on what we've seen so far — and we are in the early stages of this analytical work — we are seeing some potential indicators that will help us highlight specific elements of policies and analyze them more efficiently. To get a better sense of our process and how we're carrying out this work, I've outlined a few of the primary ways we are analyzing privacy policies here.

Reading Level
We run a high-level textual analysis of each policy and calculate reading levels using six openly documented formulas: the Automated Readability Index, the Coleman–Liau Index, the Flesch–Kincaid Grade Level, Flesch Reading Ease, the Gunning Fog Index, and SMOG. By using multiple measures, we can cross-reference reading levels across policies in cases where one method registers as an outlier. An initial assessment of reading levels also allows us to compare the audience of a specific application with the reading level of its terms. For example, it will be interesting to see how many applications designed for middle schoolers have terms at a middle school reading level rather than at a college or graduate school reading level.
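As a rough illustration of how two of these measures work (this is a minimal sketch, not our production tooling), the following computes the Flesch–Kincaid grade level and Flesch reading ease score from their published formulas. The sentence splitter and syllable counter here are naive heuristics; a real analyzer would use a dictionary-based syllable counter.

```python
import re

def count_syllables(word: str) -> int:
    """Naive vowel-group heuristic; real tools use pronunciation dictionaries."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1  # discount a silent trailing 'e'
    return max(count, 1)

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    return {
        # Published constants for each formula
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
    }

print(readability("We collect your data. We may share it with partners."))
```

The other four measures follow the same pattern: each is a weighted combination of simple counts (characters, words, sentences, complex words), which is why they can disagree and why cross-referencing several of them is useful.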

Structure and Accessibility
Next, we do some analysis of how the terms are structured. To do this, we grab a full copy of all the text used to render the page (i.e., the full source code of the page). Then, we strip it down to the actual text displayed on the page and calculate how much of the page is actual content. This calculation gives us a quick glimpse into how cleanly a page has been built, which in turn allows us to make some assumptions about potential usability or accessibility issues. We have seen cases where over 97 percent of a page was markup and less than 3 percent was actual content. Skews like this also provide an early signal of potential technical problems within an application.
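To make the ratio concrete, here is a minimal sketch of the kind of calculation involved, using only Python's standard-library HTML parser. Our actual pipeline differs; in particular, the skip list here covers only script and style elements, which is an assumption for illustration.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text a browser would actually display."""
    SKIP = {"script", "style"}  # elements whose text is never shown

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)

def content_ratio(source: str) -> float:
    """Fraction of the raw page source that is displayed text."""
    parser = TextExtractor()
    parser.feed(source)
    visible = " ".join("".join(parser.parts).split())  # collapse whitespace
    return len(visible) / len(source)

page = "<html><head><style>p{}</style></head><body><p>Privacy policy text.</p></body></html>"
print(f"{content_ratio(page):.1%} of this page is content")
```

A page that is 97 percent markup would score around 0.03 here, which is exactly the kind of outlier this check is meant to surface.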

These calculations have also led us to see some other peculiarities in how privacy policies are constructed. For example, we have seen privacy policies wrapped in markup that implies the policy is an interactive form. We have also seen policies buried in the footers of web pages; to a user with a screen reader, such a policy is somewhere between difficult and impossible to find. While the content of these policies may do a perfectly good job of protecting users, when the basics of HTML get ignored, it raises questions about the attention to detail given to the actual product.

Word, Sentence, and Policy Length
Some other elements we look for in our structural analysis of policy texts include average sentence length, overall policy length, and the percentage of words with three or more syllables. These elements give us a sense of how verbose or complex the language in a policy is. Over time, as we compare the structures of policies alongside our evaluations of them, we will be able to observe and document patterns and trends and identify, for example, the shortest possible policy that is fully transparent. Part of our work this summer will include building data visualizations to help tell these stories.
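These counts are also the raw inputs to formulas like Gunning Fog and SMOG, where words of three or more syllables are treated as "complex." The sketch below, reusing the same naive syllable heuristic as the earlier example, shows one way such metrics might be computed.

```python
import re

def count_syllables(word: str) -> int:
    # Same vowel-group heuristic as the readability sketch above
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups) - (1 if word.lower().endswith("e") and len(groups) > 1 else 0)
    return max(n, 1)

def structure_metrics(policy: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", policy) if s.strip()]
    words = re.findall(r"[A-Za-z']+", policy)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return {
        "total_words": len(words),
        "avg_sentence_length": len(words) / len(sentences),
        "pct_complex_words": 100 * len(complex_words) / len(words),
    }

print(structure_metrics("We retain aggregated information indefinitely. Contact us anytime."))
```

Computed over a large set of policies, numbers like these give each document a simple structural fingerprint that can then be compared against the evaluation results.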

We are early in the process of broadly analyzing the structure of policies and looking for patterns that connect how policies are built with what they say. But issues such as policy length, reading level, and the technical implementation of the pages that render policies are all part of the picture. It's difficult to say what constitutes a "normal" policy without a baseline, and the work we will be launching this summer will help create a clearer picture, supported by openly available data, of what a typical policy looks like.

About the Author

Bill is Director of the Privacy Initiative for Common Sense, a program that evaluates the privacy policies and practices of vendors building educational technology. Prior to joining Common Sense, Bill started and ran FunnyMonkey, an open source development shop focused on education, open educational resources, and peer-based learning. Before that, he worked as a classroom teacher for 16 years.

Common Sense Education helps educators find the best edtech tools, learn best practices for teaching with tech, and equip students with the skills they need to use technology safely and responsibly. Go to Common Sense Education for free resources including full reviews of digital tools, ready-made lesson plans, videos, webinars, and more.
