What are generative AI systems really doing with our data?

UQ data expert worries that we are not doing enough to manage the risk of AI

A future transportation concept, where many cars and GPS navigation systems are trackable. Image: Adobe Stock/DedMityay

Each month, Research News uses the latest findings to help explain the issues facing your community.

Generative artificial intelligence (AI) technology is developing so quickly that it’s impossible to know what it will turn into next, a leading professor has cautioned, amid warnings that developing and applying policy ‘guardrails’ will be critical to preventing unethical misuse of the powerful technology.

Just what those guardrails will look like is still far from clear, but a growing consensus among scientists, researchers, ethicists and others is that they must be prioritised as AI rapidly permeates every aspect of IT and, by extension, becomes enmeshed in the global business ecosystem.

"You can’t ban it because (end users) are still going to use it," explains Shazia Sadiq, a professor of computer science within The University of Queensland’s School of Information Technology and Electrical Engineering.

"You just have to put in guardrails and make them reasonable, and have some consequences attached to that.

"But this is all very fresh and new, and that regulatory framework is still largely missing."

Professor Shazia Sadiq

Developing such a framework has been challenging since ChatGPT’s release energised an AI community that is releasing dozens of new tools weekly – many, like ChatGPT, powered by OpenAI’s GPT-3.5 and GPT-4 large language models (LLMs), and others built on alternative LLMs or entirely different platforms.

The sheer diversity of contemporary AI solutions makes it hard to regulate them, or even to know what new capabilities they will develop. But as tech giants like Microsoft, Google, Meta and Amazon elbow into the AI space, their battle for market dominance is creating its own challenges.

"When there are commercial imperatives, there is the possibility of taking shortcuts just to get a bit of commercial edge," Sadiq warns.

Commercial pressures will likely perpetuate the technology as a ‘black box’ solution with little accountability or even an obligation to be accurate – leaving governments, businesses and individuals unsure whether they can trust the systems, or what AIs are doing with their data.

Learning the downsides of AI

As Chair of the Australian Academy of Science’s National Committee for Information and Communication Sciences, and Director of the ARC Industrial Transformation Training Centre for Information Resilience, Sadiq is exploring the relationship between the data that is fed to generative AI systems and the output they produce.

One risk of AI models is that the data they are fed is retained and used to train them further. In at least one case, Sadiq says, her team had to shut down a platform for a day when data from one user appeared in the responses it served to another.

A concept of facial detection and recognition on a city street. Image: Adobe Stock/DedMityay

Such issues have raised privacy concerns serious enough that the Italian Government banned ChatGPT over fears it violated the European Union’s General Data Protection Regulation.

Yet privacy is only one of the numerous concerns experts are raising as the expanding use of generative AI platforms illuminates the technology’s strengths and weaknesses.

Another, more problematic issue is generative AI’s well-documented tendency to play fast and loose with the truth – getting facts wrong, making inference errors or simply inventing things that never happened.

Understanding and fixing this ‘hallucination’ phenomenon is a priority for AI researchers – and a thorn in the side of those hoping that cooler heads can prevail around AI’s adoption.

Frequent AI hallucination "is not a good thing", Sadiq explains.

"The language models are creating information that gives an impression of greatness because it writes very well. But we know that some of the writing is completely false, with citations that don’t exist, facts that are completely wrong, and calculations that make no sense," she said.

After a decade in which social media channels amplified half-truths and falsehoods, the potential for similar flaws in a new technology as critical as AI is "particularly concerning", Sadiq says.

"We have an understanding of risks from the point of view of public harm, data leakage and data privacy – but what is difficult to predict is how these technologies will impact society."

Most users don’t question something that seems authoritative – and in a click-obsessed tech ecosystem, there is no guarantee that tech giants can make generative AI as accurate and trustworthy as many users already assume it is.

"There is no doubt that these technologies present an unprecedented opportunity," Sadiq says, noting that AI’s impact on applications such as health and education "is potentially game changing – and one of the biggest risks is that we fail to avail ourselves of this opportunity".

The national AI imperative

Use of generative AI technology isn’t the only issue facing Australian organisations as the technology matures.

AI’s effective use has become a fundamental national capability – and to continue advancing the state of the art, Sadiq argues that government, business and academia must work together on new applications and build a base of Australian AI experts who are capable of having these discussions.

The United States and China dominate the battle over AI’s development. Yet despite this lopsided ecosystem, Australia must avoid becoming little more than an end user of other countries’ technologies.

"Rather than always importing these technologies, we need to build global equity in those research ecosystems and Australian innovation ecosystems," Sadiq says.

That means growing, or at the very least sustaining, a domestic base of AI skills, and investing in an Australian cohort of AI experts who can continue to drive discovery, basic science and fundamental research.

"One of the things that is really important is that we don’t get polarised," Sadiq says.

"As soon as you get polarisation into the public discourse, it consumes all discussions, so we can’t have an informed discussion anymore.

"It is impossible to forecast opportunities of generative AI over the next decade," she explains.

"We just need to embrace this massive opportunity, but also be mindful of the risks and the concerns – both for business, (and) also for the government and for individuals."

This article was published as partner content in SecureGOV magazine.