OpenAI has launched o1, a new family of AI models that are optimized for "reasoning-heavy" tasks like math, coding and science.
Cybersecurity remains the top ed tech priority for state education leaders, according to the 2024 State EdTech Trends report from the State Educational Technology Directors Association.
The United States Department of Commerce is proposing new reporting requirements for AI developers and cloud providers. The proposed rule, issued by the department's Bureau of Industry and Security (BIS), aims to enhance national security by requiring reports on the development of advanced AI models and computing clusters.
California lawmakers have approved a bill that would impose new restrictions on AI technologies, potentially setting a national precedent for regulating the rapidly evolving field. The legislation, known as S.B. 1047, now heads to Governor Gavin Newsom, who has until the end of September to decide whether to sign it into law.
A sixth-grade ELA teacher offers best practices based on his experience using AI tools and features in the classroom.
The United States, United Kingdom, European Union, and several other countries have signed "The Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law," the world's first legally binding treaty aimed at regulating the use of artificial intelligence (AI).
The Iowa Department of Education has invested $3 million to make EPS Learning's AI-powered Reading Assistant tool available to public and nonpublic elementary schools.
The U.S. AI Safety Institute, part of the National Institute of Standards and Technology (NIST), is partnering with AI companies Anthropic and OpenAI to collaborate on AI safety research, testing, and evaluation.
OpenAI, creator of ChatGPT, is backing a California bill that would require tech companies to label AI-generated content in the form of a digital "watermark." The proposed legislation, known as the "California Digital Content Provenance Standards" (AB 3211), aims to ensure transparency in digital media by identifying content created through artificial intelligence. This requirement would apply to a broad range of AI-generated material, from harmless memes to deepfakes that could be used to spread misinformation about political candidates.
Anthropic has announced its support for an amended version of the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," California's Senate Bill 1047 (SB 1047), citing revisions to the bill that the company helped shape, though it still has some reservations.