Managing Social Media Risks

Name an online social networking site, and there are liable to be thousands of teachers, administrators, and students using it to connect with people. Whether it's Facebook, MySpace, Twitter, or one of the more "specialized" online venues, all are replete with individuals looking to tap into the growing social networking wave.

Like any new, uncharted innovation, online social networking comes with risks not associated with many "traditional" ways of connecting with people. Unintentionally offend someone in person at a bookstore, for example, and the repercussions are likely to be minimal. But post a photo that others deem "offensive" on your Facebook page, and you could risk alienating others and even setting yourself up for potential lawsuits.

In her recent report, "Risk Management and Social Media: A Paradigm Shift," Maureen O'Neil, president of the International Development Research Centre (IDRC), called social media tools like blogs, message boards, and social communities the "fastest growing segment" of Web content. "These forms of social networking upend the traditional form of top-down information dispersal because information freely flows in and out of an organization," said O'Neil.

The problem is that social media can expose organizations to significant risk, not the least of which is serious reputation damage, said O'Neil. That's because social media is still largely the "Wild Wild West" of the Internet: It's widely used, yet there are technically no set rules attached to it in terms of conduct. The good news is that institutions can take an active approach to influencing how their schools, students, and teachers are portrayed on these social media sites, and to counteracting damaging content.

"That requires businesses to create an Internet reputation risk management plan that addresses what visitors to your site express, what your employees share on other sites, and, most significantly, what is said about your organization on sites over which you have no direct control," said O'Neil. She suggested organizations actively engage on social network venues to understand how reputation can be affected by the interactions, and then gather information on the social media activities under consideration.

From there, assess the areas of vulnerability, create counteraction plans, and communicate them to employees. Dedicate at least one employee to the monitoring of your online reputation, remarked O'Neil, and build a process to identify new reputation risk elements as social media evolves.

"The risks organizations face as a result of participating in social media are real, but so too are the benefits," she said. "Don't let risk blind you from taking advantage of the transformational communication opportunities that arise from social media."

For schools, the need for risk management is especially high because teachers, students, and administrators alike are enjoying the benefits of connecting with one another online. Whether administrators are posting information about a recent school event, teachers are bouncing ideas off of one another, or students are posting photos of their weekend events, all of the information being shared is available for anyone to see and comment on.

The single biggest risk in social media circles is undoubtedly the participant's utter lack of control over where the information is going, how it will be posted, and who will be able to access it. To avoid potential problems in this area, pay particular attention to which pages that online information is linked to, what types of pages are attached to it, and which photos are included.

Schools looking to beef up their social media risk management strategies can start by setting up guidelines around their employees' and students' use of sites like Facebook, MySpace, and Twitter, to name just a few. Stress the fact that, once posted online, comments and photos "never go away," even if the individual poster deletes them.

Sarah Evans, an Internet marketing consultant and director of communications for Elgin Community College in Elgin, IL, said schools should pay particular attention to the feedback being posted about the institution and its students and teachers. Assign someone to "search" the various sites (for the school's name, for example) on a regular basis to essentially "police" the institution's brand and make sure it's being represented properly on social media.

"You want to make sure that you're portraying the same experience online that you do when people enter your institution's doors," said Evans, who pointed out that all social media sites incorporate a "search" function that allows users to type in keywords and "see what people are talking about in real-time, online."

Also check out exactly what the content looks like before exposing it to the rest of the world. (If one of your teachers has his or her own Facebook page, pull it up online and see what it looks like to others.) Pay attention not only to the teacher's or student's own comments and postings, but also to the feedback being posted by "friends" who are reading--and commenting on--those social networking activities.

Keep an eye out for information that could be construed as defamatory or that could evoke offensive or overly negative comments (via a blog, for example) from the people who are participating in the online social circles.

Also pay attention to copyright and intellectual property rights infringement--two lines that are fairly easy to cross on the Internet, where content appears to be "free" for all. The reproduction of an article, document, or photo, for example, can easily trigger a claim of copyright infringement.

By taking an active approach to risk management, schools will be better prepared to deal with such issues, if and when they come up. Unfortunately, many organizations and institutions prefer to ignore the problem. Eddie Schwartz, chief security officer at Internet security monitoring firm NetWitness in Herndon, VA, said institutions that choose to turn a blind eye to the social media sector are doing themselves a disservice that could, at some point in the future, turn into a much larger problem.

"Schools don't necessarily have to use a 'Big Brother' approach," said Schwartz, "but they must develop guidelines for using these sites, monitor how they're used, and figure out what to do when the lines are crossed."
