eChalk, Thinkronize Partner for Online Learning


At the Texas Computer Education Association (TCEA) conference in Austin this week, education technology developer Thinkronize announced several new initiatives, including an education partnership with eChalk and integration with Lightspeed Systems' Total Traffic Control security system.

The partnership with eChalk will "offer schools nationwide a safe and secure online learning environment with standards-based digital content," according to the two companies. eChalk is the developer of the Online Learning Environment, a Web-based suite of communications, learning, management, administrative, and collaboration tools. Through the new partnership, which grew out of the companies' relationship in Texas' Technology Immersion Project, users of eChalk's Online Learning Environment will be able to access Thinkronize's netTrekker d.i. education-focused search engine directly with a single login.

Thinkronize also announced a new integration deal with Lightspeed Systems for its Total Traffic Control version 7.0, which is covered in a separate article.

Finally, the company also announced this week that it's passed a new milestone: More than 11 million students have now used the service. That's an increase of 1 million students since fall 2007. The safe search engine is also being used by 600,000 teachers and 20,000 schools in the United States.


About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/ .

