New Site Grades Android Apps on Privacy

Google Maps gets an A. The free version of Angry Birds gets a C. And My ABCs by BabyBus gets a D. The letters assigned to each of these Android apps are grades, and while A is great, D means failure — in privacy, that is.

Those grades and a million others were assigned through a scanning application that combines automated techniques with crowdsourcing to capture an app's behavior and measure the gap between how people expect the app to behave and how it actually behaves. For example, people expect an app such as Google Maps to use the smartphone's location data. But there's little reason for a game like Angry Birds or an educational app such as My ABCs to read phone status and location or gain network access, other than to identify users for market and customer analysis and deliver targeted advertising.
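The expectation-gap idea can be illustrated with a toy model. This is not PrivacyGrade's actual algorithm; the permission names, expectation scores, and grade cutoffs below are all hypothetical, meant only to show how crowdsourced expectations could be compared against observed permission use:

```python
# Toy sketch (NOT PrivacyGrade's real model): grade an app by the gap
# between crowdsourced expectations and the permissions it actually uses.

def privacy_grade(expected: dict, actual: set) -> str:
    """expected maps a permission to the fraction of surveyed users who
    expect the app to use it; actual is the set of permissions the app
    is observed to use."""
    # Each used permission contributes "surprise" equal to how unexpected it is.
    surprise = sum(1.0 - expected.get(p, 0.0) for p in actual)
    gap = surprise / len(actual) if actual else 0.0
    # Hypothetical cutoffs: small gap earns a good grade.
    for cutoff, letter in [(0.25, "A"), (0.5, "B"), (0.75, "C")]:
        if gap < cutoff:
            return letter
    return "D"

# A maps app: users expect location use, so the gap is small.
print(privacy_grade({"location": 0.95, "network": 0.9},
                    {"location", "network"}))          # A
# A children's app reading location and phone state: large gap.
print(privacy_grade({"location": 0.05, "phone_state": 0.02},
                    {"location", "phone_state"}))      # D
```

The key design point the sketch captures is that a permission is not penalized in itself; it is penalized only to the degree that users do not expect that particular app to use it.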

That's why a research team at Carnegie Mellon University has launched PrivacyGrade, a Web site that shares privacy summaries highlighting the most unexpected behaviors of an app. The goal is to help smartphone users manage their privacy more thoughtfully.

"These apps access information about a user that can be highly sensitive, such as location, contact lists and call logs, yet it often is difficult for the average user to understand how that information is being used or who it might be shared with," said Jason Hong, associate professor in the Human-Computer Interaction Institute, and primary investigator for the project in the Computer Human Interaction: Mobility Privacy Security (CHIMPS) Lab. "Our privacy model measures the gap between people's expectations of an app's behavior and the app's actual behavior.

PrivacyGrade also examines which third-party code libraries make use of the resources culled by the app. If the app accesses location data, the program checks to see if it's used by a library such as Google Maps, suggesting it is simply being used for mapping, or if it is being used by an advertising library, an indication that it will be used for targeted ads.
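That library check can be sketched in a few lines. The package names and category assignments below are illustrative assumptions, not PrivacyGrade's actual library database; the point is only that the same permission is judged differently depending on which third-party library consumes it:

```python
# Hypothetical sketch of the library-purpose check described above.
# Package-name-to-purpose mappings here are examples, not real data.

LIBRARY_PURPOSE = {
    "com.google.android.gms.maps": "mapping",   # e.g. a maps SDK
    "com.example.adnetwork": "advertising",     # a hypothetical ad library
    "com.example.analytics": "analytics",       # a hypothetical analytics library
}

def classify_location_use(libraries_using_location: list) -> str:
    """Return the most privacy-relevant purpose behind location access."""
    purposes = {LIBRARY_PURPOSE.get(lib, "unknown")
                for lib in libraries_using_location}
    # Advertising use is the red flag; mapping alone matches user expectations.
    if "advertising" in purposes:
        return "targeted ads"
    if purposes == {"mapping"}:
        return "mapping"
    return "unclear"

print(classify_location_use(["com.google.android.gms.maps"]))  # mapping
print(classify_location_use(["com.example.adnetwork"]))        # targeted ads
```

Routing the judgment through the consuming library, rather than the raw permission, is what lets the same location request earn an A in a navigation app and a D in a children's game.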

The site doesn't currently include paid apps, on the presumption that developers who earn income from sales are less likely to sell user data to other companies. Eventually, funding permitting, the CHIMPS team may add additional apps to the site, including apps for iOS, Windows Mobile and BlackBerry.

The work was funded through a National Science Foundation grant, as well as the Army Research Office, NQ Mobile and Google through its faculty award program.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
