The internet may have been a creature of the US military-industrial complex and, later, a global network of computer engineers working on it for LOLs, but there is also something quintessentially Australian in the way it has taken ownership of us. The pantheon of tech titans that now rule the world subscribe to the same creed of rugged exceptionalism our forebears brought to these shores. They colonised our lives as the Europeans invaded this ancient continent: a new and foreign power brazenly staking its claim on our reality as if it had never been occupied, as though what was ours already did not warrant further consideration.
This is not to draw equivalence with Indigenous dispossession but there is an arrogance in this erasure that echoes our own dark past. They established their domain, their values, their version of reality, oblivious to what was already there. In exchange they offered up trinkets, mirrors and whistles, rewarding us for our acquiescence, our submission, with the promise of a more advanced way of being. But they didn’t tell us about the disease and addiction that would follow, the erosion of the world of which we had been custodians, the desecration and destruction of our laws and our systems of government and our culture, our very way of being. None of this was spoken of because none of these were ever considered. It was not even ignored, because in their eyes there was nothing there to replace. Our reality was simply an empty land, terra nullius.
• • •
The promise of connection and the destruction of the old industrial hierarchies were presented as self-evident goods and we all embraced them with gusto. The times were right: capitalism had triumphed after a 75-year struggle with systems of state control, whirling around the globe on a victory lap that began with the collapse of the Berlin Wall and culminated in the launch of the World Wide Web Consortium five short years later. Built by a radically collaborative network of hackers who subscribed to an open-source ethos, network connectivity was embraced as a self-evident good, one where the only role of government would be to bridge the digital divide and then get out of the way. In its place, new networks of human ingenuity operating on a global platform of liberty would create a brave new world that would refuse to genuflect to the old institutions of power. It was, after all, a revolution.
Three decades on, the network has become our lives, our connections dominating our attention while extracting our essence as they capture and log our every online action. We are hurtling through history with an intensity powered by Moore’s law, the observation that computer processing power doubles roughly every two years. This exponential growth of capacity has been so successfully integrated into our lives that these new norms seem unremarkable. We flick through our handheld devices seeking affirmation and connection, looking for the next sugar-hit of an impulse. Like a tweaker on a big night out, the high creates its own hunger for something more. To sate our desire we stare into our screens longer, looking for more stimulation, more validation, too engrossed to take stock of where all this connection has taken us.
The questioning of this virtual reality has been a slow burn. While the elevation of Trump was a feature, not a bug, of shallow anger-fuelled hyper-connectivity, it gave pause to question whether the web was a self-evidently transformative tool. The failure of our polity to confront the existential challenge of climate change made us wonder whether the free flow of information led to deeper insight. The destruction of the traditional media and the decline of trust in the old institutions of power left a gaping hole in the public square that privately owned platforms purport to fill without committing to the maintenance of the public interest or even their own basic hygiene.
Meanwhile, the commercial development of algorithmic decision-making that learns as it goes leaves many facing economic redundancy, while the development of the technology in more authoritarian states speaks to more basic dehumanising threats. And every day we struggle to maintain focus as global industries commit themselves to keeping us connected and distracted for longer and longer, hiding their business models behind the zeitgeist that they so lovingly engineer.
The companies that own the future, such as Facebook and Google, talk of disruption and self-regulation as though they are established values. But their success and its consequences prompt a more fundamental question: can this new civilisation, if that’s what we can honestly call it, be channelled and harnessed to our collective benefit? Or will it simply roll over us, on its own self-serving terms, creating a future to fit its purpose?
After three decades of freedom it is starting to dawn on more of us that the world the web has created needs to be controlled, or at least harnessed, by the people it affects. It’s a novel idea, really: that the law of the land should apply. But imagine if a nation built on illegal dispossession, a dumping ground for petty criminals and political dissidents, could take a lead in shaping that vision.
• • •
The rule of law is in Ed Santow’s DNA. The son of a Supreme Court judge, Santow cut his teeth at the Public Interest Advocacy Centre (PIAC), an underfunded not-for-profit running social justice cases on behalf of the powerless. There he regularly locked horns with then attorney-general George Brandis, who attempted to give effect to his government’s distinctly illiberal instincts. When Brandis offered him a job at the Human Rights Commission, Santow was blindsided at first but saw it as an opportunity to chart an inside track.
In his new role Santow has taken it upon himself to review the impact of technology on human rights. He argues that if an automated decision-making system unfairly disadvantages a category of people, it simply shouldn’t be allowed. This would mean the conscious de-prioritisation of some types of data that could be collected. It would mean a focus on not just the form, but also the effect, of an automated decision. It also means recognising the biases inherent in those who design the source code. But underlying these consequences is an even more radical idea: AI can’t be developed in a vacuum—it has to comply with existing laws.
The landmark indirect discrimination case run by Santow’s old crew at PIAC established that systems that inform decisions can discriminate. The case involved the Illawarra Steelworks, which during the 1980s finally allowed women to enter the male-dominated plant. When the recession hit, the company took the well-worn ‘last on, first off’ approach to redundancies, meaning the lady tradies were the first ones out. The case went all the way to the High Court, where it was held that the system informing the decision breached the law because the women would bear an inordinate burden. It was not a conscious decision by the company to target the female workers, but the logic of the system led to that unfair outcome.
If governments make decisions based on algorithms, Santow argues, they need to ensure the systems they design, like the one used by Illawarra Steelworks, treat people fairly. This is not a hypothetical. The lab rats of automatic government, recipients of welfare payments, are an inherently disadvantaged group, who skew female and Indigenous. Under the so-called robo-debt program, algorithms match Newstart and ATO data to identify and recover payments made to people who it calculates shouldn’t have received them. These notices, which are often wrong, cause widespread upset and suffering. And because the systems reverse the onus of proof on to the recipients, and the people who are wronged lack the power to do much about it, the entire design of the scheme could be held to indirectly discriminate.
Designing ethical systems is important, Santow believes, because automated decision-making is fundamentally different to human decision-making. ‘Humans have always prized deductive reasoning, especially in making big decisions,’ he says. ‘We observe a cause and an effect and draw a conclusion based on that evidence.’ Machine thinking is different. It makes us far more reliant on inductive reasoning than at any time in human history. Inductive reasoning takes a big pool of data and looks for correlations. This often yields really interesting patterns, but that is all they ever are: patterns. For example, the majority of murderers may wear shoes over size 10, but does that create a causal link between big-footedness and homicide that justifies targeting big-foots?
In the real world it is not just government that has to comply with anti-discrimination law. What of the bank that rates men a better loan risk than women because of observable data on likely career trajectory, breaks for children and average retirement age? If these are relevant inputs, shouldn’t the bank be required to balance them with the stats showing women stay in jobs longer? Or that they are less likely to be an office harasser? The factors considered relevant for input will determine the fairness of the output.
What about the health insurer whose model determines that an Indigenous person is, on average, more likely to die young? Should this be grounds for a higher premium? Or the automated job-search engines that could easily weed out applicants on grounds of age, gender or postcode, or a combination of these that don’t meet the predictive model for a successful hire?
If Santow’s thesis holds—it’s being vigorously tested by some of the sharpest minds around the world—the development of AI will evolve with guardrails in place. ‘Just because we can’ will no longer be a good enough excuse. At least, not in Australia.
‘The core values of any country are encoded in its laws,’ he says. ‘If we want technological change to reflect our values, we need to insist that those pushing forward this change comply with our basic principles.’ This shouldn’t be a radical proposition. The wave of social reform arising from the last major technological disruption, the Industrial Revolution, rejected the Hobbesian world view that only the strongest should prevail, and to hell with everyone else. By contrast, a liberal democracy prizes equality of opportunity and combats the most powerful riding roughshod over everyone else.
Santow’s thesis has broader consequences for the information economy and its underlying assumptions because if the laws of the land should apply to new technology, then so much of what has come to seem normal would never have been allowed. For example: a cool new app that allocates and directs work would have to comply with industrial laws; a ride-sharing service would have to comply with taxi regulations; a hip new service that allowed travellers to book holiday beds would have to meet the same health and zoning requirements as existing accommodation businesses. There are examples everywhere. There are laws governing the surveillance of employees in the workplace, yet companies routinely fit field staff with a GPS to track their movements. There are rules around the conduct of our elections, demanding truth in advertising, yet Facebook, which has hoovered up a dominant cut of the election advertising budget, asserts that checking facts is against its mission of freedom; or at least against its business model.
‘A lot of people think that dealing with technological change is all about coming up with new rights and legal principles,’ Santow says. ‘Let’s start by rigorously applying the laws that have proven remarkably enduring over many decades in protecting our basic rights.’
• • •
It says something about the ubiquity of the web and the near-universal endorsement of its mission to connect that it’s taken more than two decades to build a political critique more sophisticated than the banal libertarianism that has become its ideological motif. The principle of the web is radical openness, in part a product of its moment in history, but more profoundly a self-serving belief among many of the gods of Silicon Valley that government was a hindrance to creativity and enterprise. To the extent there is philosophy in Silicon Valley it is the radical individualism of Ayn Rand, that individuals untethered from the state drive the path of progress and, with it, history.
In this world the notion of rugged individuals entering a virgin wilderness, using their ingenuity to harvest the available resources and reaping the benefits, makes perfect sense. In this world the only role of government is to subsidise the infrastructure and then get out of the way. So much of the Australian political debate has been focused on this ‘how’, the various iterations of the NBN, passionate debate about high-speed fibre, with an assumption that the ‘what’ would take care of itself. Moments of government intervention, for example Howard government minister Richard Alston’s ill-fated attempt to filter inappropriate content, were ridiculed as ‘censorship’ and proof he didn’t get it.
Rather than setting guidelines on how the web should evolve, government became a willing partner in expanding its remit. In the wake of the September 11 attacks, the Bush administration worked with telecommunications providers, platforms and websites to collect and analyse citizen information all in defence of the homeland. The extent of this secretive and coordinated effort, across national boundaries including Australia’s, was exposed by Edward Snowden only in 2013.
As governments were working to expand their gaze, tech businesses decimated by the millennial tech wreck were seeking a new business model based on the analysis and monetisation of this surveillance output. Google was a pioneer in its capture and resale of search behaviour. Cookies, embedded codes that, inexplicably, are permitted to attach themselves to a user and follow them around the web, were a weapon. And when Facebook became the social network of choice, that process went in situ, users happily offering every intimate personal detail for analysis and repackaging.
The growing awareness of the potential of the web, not just to connect people but also to collect and exploit their personal information, has informed a new wave of analysis of the web and the impact of its operations. The sharpest has been the work of Shoshana Zuboff, a Harvard Business School academic who in early 2019 framed the notion of ‘surveillance capitalism’.
Her take on the digital economy, elaborated in her book of that year, The Age of Surveillance Capitalism, is essentially Marxist: our behavioural surplus, personal data consciously entered and, more profoundly, the footprint of our incidental digital activity are rendered and commercialised. What look like free and open platforms are sophisticated extraction devices, aimed at acquiring massive troves of raw data at scale. This harvest creates the massive banks of data that engineers mine, scientists repurpose and behavioural psychologists deploy in order to create new forms of value, be they direct marketing, behaviour modification or, most insidiously, the pursuit of human redundancy. We users are the digital proletariat who are exploited in the production and the deployment of this resource.
This is the logic driving the so-called ‘Internet of Things’: smart devices that will power our homes and neighbourhoods while collecting data on everything in reach. Amazon’s Alexa home organiser is a harbinger, Zuboff argues; it will always respond to your call because it is always listening and as it listens it is capturing every word that you utter in its vicinity. Personal assistants are the new frontier of behavioural surplus, where the dark data continent of your inner life—your intentions and motives, meanings and needs, preferences and desires, moods and emotions, personality and disposition, truth telling or deceit—is summoned into the light for profit.
Zuboff argues that to derive full value from these extraction technologies, surveillance capitalism has to free itself of the constraints of government. ‘Surveillance capitalists are no different from other capitalists in demanding freedom from any sort of constraint,’ she writes. ‘They insist upon the “freedom to” launch every novel practice while aggressively asserting the necessity of their “freedom from” the law and regulation.’
The road map is well worn: assert sovereignty, set up shop and portray oneself as an outsider up against the established vested interests. Purchase the services of the best spin-doctors, lawyers and lobbyists to minimise the extent of any claw-back regulation. Pledge allegiance to shareholders rather than to any national interest (and avoid wherever possible paying tax). In short: be a law unto yourself.
• • •
If terra nullius reigns in Australia’s cyber space, Lizzie O’Shea is putting together what could, in its impact, become the internet’s equivalent to the Mabo case. An intense, committed, campaigning lawyer, O’Shea sits on the board of Digital Rights Watch, an activist group that advocates against state and corporate surveillance. She has also written an earnest but thoughtful book, Future Histories, about what past social movements can tell us about responding to technological change. Spoiler alert: the lesson is to stand up to bullies, and right now that’s exactly what she’s doing. Now, with her firm Maurice Blackburn, O’Shea is mounting a class action against Uber.
Class actions are the backstop of the legal system. A group of individuals sue a powerful entity for economic loss caused by wrongdoing. In an age where the penalties for corporate crime are often so weak as to be symbolic, the threat of big class actions can force changes in corporate culture because they shift the risk needle. They are long-lead affairs, with high upfront investments in identifying plaintiffs and preparing legal documentation. But when they land, they are whales that can bring companies to account and change the decision-making matrices for all their competitors.
Over the past decade Maurice Blackburn targets have included a rollcall of public misfeasance, from the Black Saturday bushfires class actions to Centro, NAB and Colonial First State Super. Where wrongdoing has led to economic loss for disaster victims, consumers or shareholders, class actions provide a path to restitution. But the Uber case is way more significant than payback from a group of disgruntled investors: it goes to the heart of the modus operandi of technology start-ups and the human impact of their ‘disruption’.
Lead plaintiff Nick Andrianakis is one of hundreds of cabbies whose lives were ripped apart when the ride-sharing service drove a stake through the taxi industry. The value of the ‘plate’, the licence to operate a taxi, plummeted by hundreds of thousands of dollars until the plates were worthless. Uber’s game plan was always to establish a beachhead and then pay lobbyists to convince government that it should be allowed to operate with some pared-back regulation. It was a strategy that ultimately succeeded for Uber, if not for Nick.
But what Uber didn’t account for was a principle embedded in Australian common law. The tort of ‘conspiracy’ is based on the notion that if someone sets out to break a law and does you subsequent damage, then you should have the right to seek compensation for that loss. That’s what the cabbies are arguing: Uber intentionally broke the law and, as a direct result of that illegal activity, they suffered loss to their livelihoods.
Conspiracy is a challenging tort because people generally don’t make the conspiracy public. But Uber has been upfront about all of this, proud of their strategy. In the start-up phase there were reports of drivers burning through phones to avoid detection and offers from the company to pay the fines of those caught breaching regulations. Uber talked about its disruptive strategy in its prospectus, not realising it was opening itself to damages claims.
In 2017 it was discovered that Uber had been using a secret program to deter law enforcement in countries where it was operating outside regulation, including Australia. The program, called ‘Greyball’, mashed data from locations, credit cards and social media accounts to identify local officials and divert drivers from their calls, on suspicion a booking could be part of a sting. This is no simple program; it is a concerted effort to avoid enforcement checks. When caught, Uber claimed it was all about identifying fraudulent users, although it lumped local enforcement ‘colluding with the taxi industry’ in that group of undesirables. From any rational perspective only a business knowingly operating outside the law would go to that sort of trouble.
‘It might be inspirational to call on your workers to “move fast and break things”, but when things break there have to be consequences,’ O’Shea says. ‘The impression among start-ups is you don’t need to comply with the law, but that’s not how democracy works.’ Rather, the rules are there for a reason. We design public policy to reflect our values, ensuring for example that people with a disability can access transport. When you allow operators not to comply you bypass the democratic policymaking process.
In the intricacies of the detail, it’s easy to lose sight of the big picture. The case will be heavily contested, but if the taxi drivers prevail this will set a new common law standard for digital disruption; one where a discussion on human cause and effect will become part of the engineering challenge of designing a new product. At the very least the disruptors might pause to think for a moment of the consequences of their disruption. If the damage of ignoring laws can be quantified then the victims have a voice and consequences will have to be factored into the start-up business plan. At least in Australia.
• • •
The idea of the World Wide Web is collapsing before our eyes. The US mothership remains a bastion of hyper-capitalism, but there are alternatives emerging. China has carved out a state-controlled network behind its Great Firewall; Russia’s web is an arm of Putin’s kleptocracy; Europe, as Europe does, is attempting to establish a set of admittedly still-inadequate rules to govern the flow of data. The developing world is up for grabs, with the provision of communications infrastructure emerging as a new arm of geopolitical expansion.
Australia too is beginning to imagine how its web will evolve. It refused to allow Huawei, a Chinese telecommunications giant with deep ties to the Chinese state, to build our 5G network. Its ill-fated My Health Record proposed that online medical records would not be accessible outside our national border. Demands are being made on global platforms such as Facebook to play by our values, particularly in the wake of the live-streaming of the Christchurch atrocity. These assertions of national sovereignty are still largely reactive, a search for a response to a new problem. But there is an alternative approach, which starts with Ed Santow’s proposition that we already have the laws we need; we just need to ensure they are observed.
Maybe as the web fragments, there is an opportunity to carve out an Australian version of cyberspace. It would be a place where our existing laws and customs are respected; an Antipodean sense of place and story and identity, the sort of mash-up of Indigenous songlines, European development and the reality of Asian integration that Keating hinted at as prime minister and for a few short years came close to realising. Maybe if we just lift our heads from our screens for a few minutes we can attain the clarity to put these pieces together.
Or at least start by naming the challenge.
The gift of the common law, in particular, is that it is part of democracy’s foundation, the accumulation of judgement by those we entrust to pass judgement. Yes, it can be overridden by legislation, but left to its own devices, it grows and evolves and ensures our democratic traditions continue to leave their mark on our society. From Mabo came the Wik decision, from Wik came a structured settlement of title, and while the problems of Indigenous Australia will not be settled by a court case, it has changed the context, has at least opened the way for more transformative conversations such as a voice to parliament, constitutional recognition and fleeting moments of human empathy.
What could a treaty with our tech titans look like? Control over our personal information would be declared an inalienable right, so we can exercise our ‘freedom of reach’ over where our data goes and we would have the right to access it and the right for it to be forgotten when we don’t want to share it any more. Our children would be protected from exposure to behaviour modification as they are going through the precious education journey. We would normalise active vigilance of our democracy and our institutions.
Like Australia’s own national story, it’s not just about being held to account for what has already happened, it’s also about taking responsibility for what comes next. As the World Wide Web fragments, we could build our own competitive advantage if we took up the challenge of designing a network that reflected the best of us. Ed Santow’s entreaty is to innovate while promoting respect and fairness, not just because it is the ethical or legal thing to do, but because it is the smart thing to do. Lizzie O’Shea’s warning is that if this doesn’t happen there will be a reckoning. For those ready to imagine a future where technology works for us, the Australian people, the good news is that the building blocks are already in place.
Peter Lewis is the director of Essential Media, a progressive strategic communications agency, and of the Australia Institute’s Centre for Responsible Technology. His book Webtopia: The Worldwide Wreck of Tech and How to Make the Net Work was published by NewSouth last year.