UNSW Business School experts say businesses should develop ethical, socially responsible, trustworthy and sustainable data business models to protect consumers’ privacy in an AI-driven world.

Digital Privacy – Artistic Effects. Image credits: Pixabay, CC0 Public Domain via StockVault
The increasing use of Artificial Intelligence (AI) and new surveillance technologies has raised alarm globally. In Australia, the number of big data breaches, particularly in the finance and healthcare sectors, has increased over the years. Mismanagement of information; opaque, excessive and widespread surveillance; and the increased use of facial recognition and other biometrics are some of the latest developments causing concern.
As a result, Australians are concerned about protecting the privacy of their data and have become more sceptical of businesses that collect, handle and share information about their activities, interests and preferences.
With AI adoption predicted to continue growing over the coming years, concerned consumers and regulators have forced businesses into costly reviews of their data systems, data handling processes, and data project governance and assurance.
But many business models and data architectures were not designed for privacy and security by default, said Peter Leonard, Professor of Practice at the UNSW Business School in the School of Information Systems and Technology Management and the School of Management and Governance.
Prof. Leonard said another reason businesses have struggled with privacy issues is that it has taken the Australian government more than two decades to start serious discussions about making the Australian Privacy Act fit for purpose in the 21st century.
How does AI affect your privacy?
The most difficult area to address in AI and data privacy is data profiling. For example, insurers can use AI profiling to avoid taking on high-risk customers, undermining the risk pooling that enables premiums to be affordable across a broader base of insured individuals, Prof. Leonard said. Data privacy, consumer protection and insurance sector-specific laws do not address profiling-enabled targeting, and so do not ensure that consumers are treated fairly.
“For example, many concerns about AI relate to the use of profiling to differentiate the treatment of individuals, either alone (as entirely ‘individuated’ persons) or as members of a segment sharing common characteristics. There may be unlawful discrimination (deliberate or inadvertent) against persons who have protected attributes (e.g. race, religion, gender orientation).
“Often, discrimination between individuals is not illegal, but it can be perceived as unfair or simply unexpected. AI enables targeted discrimination to be automated, cost-effective, and increasingly granular and valuable to businesses,” Prof. Leonard said.
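The dynamic Prof. Leonard describes can be made concrete with a minimal, entirely hypothetical sketch: a profiling model loads an insurance premium by a risk score, and a simple ratio check flags when average outcomes diverge between groups defined by a protected attribute. All names, fields and thresholds here are illustrative assumptions, not anything from the article or any real insurer's system.

```python
# Hypothetical sketch: automated profiling producing differential treatment,
# plus a crude disparity check across a protected attribute.
# All field names and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Applicant:
    risk_score: float  # output of some profiling model (0.0 = lowest risk)
    group: str         # a protected attribute, e.g. "A" or "B"

def quoted_premium(applicant: Applicant, base: float = 500.0) -> float:
    """Load the base premium by the model's risk score (granular targeting)."""
    return round(base * (1.0 + applicant.risk_score), 2)

def disparity_ratio(applicants, group_a: str = "A", group_b: str = "B") -> float:
    """Ratio of average premiums between groups; far from 1.0 warrants review."""
    def avg(group):
        premiums = [quoted_premium(a) for a in applicants if a.group == group]
        return sum(premiums) / len(premiums)
    return avg(group_a) / avg(group_b)

applicants = [
    Applicant(0.10, "A"), Applicant(0.15, "A"),
    Applicant(0.40, "B"), Applicant(0.55, "B"),
]
# A ratio well below 1.0 means group B is, on average, quoted higher premiums.
print(disparity_ratio(applicants))
```

The point of the sketch is that the discrimination is an emergent property of the risk scores, not an explicit rule: the premium function never mentions the protected attribute, yet group-level outcomes can still diverge, which is exactly the "inadvertent" case Prof. Leonard flags.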
“Then there is a plethora of other issues, including being transparent, or at least not deceptive, about how a business deals with and uses data about individuals, and not being ‘creepy’, or otherwise inappropriate, in the data you are collecting.”
How can businesses protect users’ data?
Prof. Leonard said simply complying with current data privacy laws is no longer enough. Concerns about trust and privacy go hand in hand, but without adequate laws and guidance, businesses must fill the gaps themselves.
“You have to start anticipating where the law might go and fill in the gaps in the law by thinking about what is a responsible way to act or what is an ethical way to act. It is difficult for businesses to do this, and they will need to consult a wide range of stakeholders, including experts thinking about corporate social responsibility and ethics in the digital age.”
While it is clear that Australia’s privacy laws need to be reformed to address modern problems in a rapidly growing digital world, reform is complex and controversial, especially given that data privacy is inherently multifaceted and complex, Prof Leonard explained.
Prof. Leonard recently published a design manifesto for an Australian Privacy Act “fit for purpose in the 21st century”. The paper, Data Privacy, Fairness and Privacy Harms in an Algorithm and AI Enabled World, was one of 205 submissions filed with the Australian Attorney-General’s Department in response to the AGD discussion paper on the Reform of the Privacy Act.
Prof. Leonard put forward a number of recommendations regarding reform of data privacy law, focusing on the federal Privacy Act 1988 (Cth) (Australian Privacy Act) and proposals for reform of comparable state and territory data privacy and health information statutes.
“Many of the issues are really issues around data governance: how you create it, how management decides how the data is used, and how you architect your data holdings so that you can make the right decisions and put the right controls and safeguards in place, without creating business models that are going to blow up in your face,” said Prof. Leonard.
“Laws are important, but laws, in my experience, are typically less than a third of the issue I’m addressing when I’m advising businesses around advanced data analytics and AI.”
“Making the right decisions also requires considering the limits of AI or algorithms, and how their outputs are interpreted and understood by humans.”
Going forward, he urged businesses to consider whether critical decisions could undermine users’ trust and whether their business models are sustainable in the long run, noting that laws are ever-changing as they respond to the concerns of consumer advocates and citizens about these new uses of data.
“More often than not, the issue is not AI or the algorithm itself, but over-dependence on it, its overuse, in many circumstances … where it was inappropriate to use it,” he said, citing the government’s robodebt debacle as a prime example.
“The issues to be addressed are not black-and-white issues of can or cannot. Instead, they are much more complex issues of what a responsible organisation should or should not do,” he said.
Human decisions are important in an AI-powered world
Rob Nichols, Associate Professor in Regulation and Governance at the UNSW Business School, agrees with Prof. Leonard’s recommendations, saying that one of the fundamental ways humans can avoid potential issues with automation is to ensure that when AI is used, it is employed as a tool to aid decision-making, not as a tool for making decisions.
“One of the important things, especially in a government or regulator’s use of AI, is that it should be a decision support tool, but the decision is a human decision,” he said.
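The "decision support, not decision making" pattern can be sketched in code: the model only ever produces a recommendation, and a named human reviewer records the final decision, with the recommendation retained for audit. This is a minimal illustrative sketch of the pattern as described in the quotes; every type, function and name below is a hypothetical assumption, not any real system.

```python
# Hypothetical sketch of AI as a decision *support* tool: the model
# recommends, a named human decides, and the recommendation is kept for audit.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Recommendation:
    action: str
    confidence: float

@dataclass(frozen=True)
class Decision:
    action: str
    decided_by: str                        # always a human identity, never the model
    model_recommendation: Recommendation   # retained for audit and review

def decide(features: dict,
           model: Callable[[dict], Recommendation],
           human_review: Callable[[Recommendation], str],
           reviewer: str) -> Decision:
    rec = model(features)
    # The human may accept or override; the model's output never becomes
    # the decision without passing through human review.
    chosen = human_review(rec)
    return Decision(action=chosen, decided_by=reviewer, model_recommendation=rec)

# Example: a cautious reviewer escalates any low-confidence recommendation.
def toy_model(_features: dict) -> Recommendation:
    return Recommendation("deny", 0.55)

def cautious_reviewer(rec: Recommendation) -> str:
    return rec.action if rec.confidence >= 0.9 else "escalate"

d = decide({}, toy_model, cautious_reviewer, reviewer="officer-42")
print(d.action, d.decided_by)  # escalate officer-42
```

The design choice that matters here is that `Decision.decided_by` can only hold a human identity, which keeps accountability where A/Prof. Nichols says it belongs: with a person, not the automation.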
While Australia’s current privacy laws are inadequate for the problems facing businesses and consumers in the modern world, more laws are not necessarily the answer. Instead, it is fundamentally more important to consider how data use can adversely affect humans or be socially beneficial, said A/Prof. Nichols.
“Businesses should think about: what are you using it for? Is it in support of a decision? Why is it important? Because it’s still on the head of the CEO … the decision is made by a person. It’s a decision support tool, not pure automation. And I think it’s imperative to differentiate between the two. And that’s where the biggest risk comes in business: when you haven’t made that distinction,” he said.
“AI regulation needs joined-up policy,” said A/Prof. Nichols. “We need to be able to address data security and privacy protection concurrently, with a coherent policy approach to all of these issues. Being able to walk and chew gum at the same time is important, and sadly very absent.”
Source: UNSW