Artificial intelligence has become the new frontier of the tech world, creating a regulatory Wild West that countries are scrambling to manage.
According to the Center for Strategic and International Studies, “AI has direct implications for national security, military capabilities, and global economic competitiveness”.
Chief Security Officer at Adisyn, Thomas Jreige, said AI has created sophisticated cyber threats “necessitating advanced cybersecurity measures”.
Artificial intelligence is an engineered system that can generate predictive outputs such as content and recommendations. Different AI systems are designed to operate with varying levels of autonomy.
Machine learning is a subset of AI that teaches a machine to make accurate predictions from a historical data set. It is used in many different circumstances, including object and speech recognition, trend analysis, and predictive analytics.
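As a minimal illustration of prediction from historical data (the figures below are invented for the example), a simple least-squares trend fit can forecast the next value in a series:

```python
# Illustrative only: fit a straight-line trend to hypothetical monthly
# figures and forecast the next month with ordinary least squares.
import numpy as np

history = np.array([120.0, 135.0, 149.0, 171.0, 188.0, 204.0])  # invented data
months = np.arange(len(history))

# Fit y = slope * x + intercept to the historical points.
slope, intercept = np.polyfit(months, history, deg=1)

# Predict the next (unseen) month from the learned trend.
forecast = slope * len(history) + intercept
print(round(forecast, 1))
```

Real systems use far richer models, but the principle is the same: learn a pattern from past data, then apply it to new inputs.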
Personr uses machine learning to detect artificially generated images or documents in the identity verification process. These technologies look for and detect subtle inconsistencies that are missed by the human eye. Through machine learning, Personr can detect irregularities in image depth, artifacts, eye reflections, skin texture, and blood flow.
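Personr's actual models are proprietary, but one generic forensic signal of the kind described above is local texture: a synthetically pasted region is often unnaturally smooth compared with genuine sensor noise. A hedged sketch (not Personr's pipeline) of flagging such a region:

```python
# Hypothetical sketch, NOT Personr's actual method: flag image blocks whose
# texture variance is anomalously low, a common sign of a pasted region.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(128, 10, size=(64, 64))  # noisy "genuine" image
image[16:32, 16:32] = 128.0                 # perfectly smooth "pasted" patch

def block_variances(img, block=16):
    """Variance of each non-overlapping block of the image."""
    h, w = img.shape
    return np.array([[img[r:r + block, c:c + block].var()
                      for c in range(0, w, block)]
                     for r in range(0, h, block)])

variances = block_variances(image)
threshold = 0.1 * variances.mean()
suspicious = variances < threshold  # True where texture is implausibly flat
print(int(suspicious.sum()))
```

Production systems combine many such signals (depth, reflections, skin texture, blood flow) with trained classifiers rather than a single hand-set threshold.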
Whereas machine learning models make predictions, generative AI creates new data that resembles its training data, such as an image or text based on what it has learned previously.
Some believe the risk of AI is relatively high, with the Center for AI Safety stating, “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
Many ethical considerations exist when using AI, such as whether people must disclose if they have used artificial intelligence to create their work.
Disclosure can be essential for three reasons: trust between the creator and the consumer, getting ahead of the law and remaining compliant with legislation, and not infringing on copyrighted material.
The regulation responses to AI have differed significantly across the globe. Some countries and regions have approached the new technology with a top-down strategy, while others have focused on a bottom-up approach.
Towards the end of 2023, the European Parliament reached a provisional agreement on the AI Act. This top-down approach prohibits some uses of AI that pose an unacceptable level of risk.
Banned applications include:
- biometric categorisation systems that use sensitive characteristics, such as political or religious beliefs
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- emotion recognition in the workplace and educational institutions
- social scoring based on behaviour or personal characteristics
- AI systems that manipulate human behaviour or exploit people's vulnerabilities
China was the first country to pass a law specifically targeting generative AI. The rules impose obligations on both the training data companies may use and the outputs their services produce, with implications for Chinese technology exports and global AI research networks.
China’s new rules also require algorithms to be reviewed by the state in advance to ensure that they reflect core socialist values.
Australia has yet to pass AI-specific legislation; however, existing provisions apply. Australia focuses on Safety by Design principles, which cover service provider responsibility, user empowerment and autonomy, and transparency and accountability.
Given the geopolitical discourse around AI, Australia is expected to release proposed legislation soon.
Personr hopes to see the following regulations introduced:
It is not just governments that have concerns about the unregulated Wild West; Google CEO Sundar Pichai told Business Insider, “AI is too important not to regulate, and too important not to regulate well”.
Dr Jreige said it is important to have “international cooperation and consistent regulatory frameworks to address the global nature of cybersecurity threats in the AI era”.
Disclaimer: This is for general information only. The information presented does not constitute legal advice. Personr accepts no responsibility for any information contained herein and disclaims and excludes any liability in respect of the contents or for action taken based on this information.
Copyright © 2023 Personr Pty Ltd (trading as Personr).