Australian regulators are weighing a strict stance on keeping younger users away from AI chatbots, with a potential March 9 deadline for app storefronts to block AI services that do not implement age verification. Regulators may require age verification to restrict mature content, and non-compliance could trigger action against gatekeeper services such as search engines and app stores. A representative for the commissioner said the office will use its full range of powers to address non-compliance, including fines of up to A$49.5 million.

A review found that only nine of the 50 leading text-based AI chat services in the region had introduced, or shared plans for, age assurance. Eleven services had blanket content filters or planned to block all Australians from using their service, leaving a large majority that had taken no public action.

The question of who is responsible for keeping children away from potentially harmful content is being debated globally. In the US, Apple and Google are lobbying to have the task delegated to platforms rather than app store operators. The Australian regulators' language suggests an aggressive stance, in line with the country's priorities as seen in last year's ban on social media and some digital platforms for citizens under age 16.

The potential ban on AI services lacking age verification is part of a broader effort to protect younger users from harmful content. Australia's approach to regulating AI chatbots and enforcing age verification may set a precedent for other countries grappling with similar questions of online safety and responsibility.
engadget.com
