Tech firms say laws to protect us from bad AI will limit ‘innovation’. Well, good | John Naughton

In May 2014, the European Court of Justice issued a landmark ruling that European citizens have the right to request that search engines remove search results linking to material lawfully published on third-party websites. This has been popularly but misleadingly described as the “right to be forgotten”; it is really a right to have certain published material about the complainant delisted by search engines, of which Google is the dominant one. Or, more crudely, the right not to be found by Google.

On the morning of the verdict, I got a phone call from a senior Google employee I knew. It was clear from his call that the company’s management had been ambushed – something its expensive legal team apparently hadn’t anticipated. It was also clear that his US superiors were outraged that a mere European institution would have the temerity to issue such a ruling. And when I hinted mildly that I considered it a reasonable judgment, I was met with a vigorous rebuttal, the essence of which was that the problem with Europeans was that they were “hostile to innovation”. At that point the conversation ended and I never heard from him again.

The memory came back while watching tech companies’ reaction to a draft EU bill published last month that, when it becomes law in two years, will make it possible for people injured by software to sue the companies that produce and deploy it. The new bill, called the AI Liability Directive, will complement the EU’s AI Act, which is set to become EU law at around the same time. The aim of the laws is to stop tech companies from releasing dangerous systems: for example, algorithms that promote misinformation and target children with harmful content; facial recognition systems that are often discriminatory; and predictive AI systems used to approve or deny loans, or to guide local policing strategies, that are less accurate for minorities. In other words, technologies that are currently almost completely unregulated.

The AI Act mandates additional checks for “high-risk” uses of AI that have the greatest potential to harm people, particularly in areas such as policing, recruitment and healthcare. The new liability bill, says MIT Technology Review, gives people and companies “the right to sue for damages after being harmed by an AI system”. The goal is to hold developers, producers and users of the technologies accountable, and to require them to explain how their AI systems were built and trained. Tech companies that fail to comply with the rules risk EU-wide class actions.

Right on cue, the Computer & Communications Industry Association (CCIA), a lobbying outfit representing tech companies in Brussels, stepped forward. Its letter to the two European commissioners responsible for the two measures complained that imposing strict liability on tech firms would be “disproportionate and inappropriate for the characteristics of software”. And, of course, it would have a “chilling effect” on “innovation”.

Ah yes, that innovation. The same innovation that gave us the Cambridge Analytica scandal, Russian online interference in the 2016 US presidential election and the UK’s Brexit referendum, and the live streaming of mass shootings. The same innovation behind the recommendation engines that radicalised extremists and pushed “10 depression pins you might like” at a troubled teenager who later ended her own life.

It is difficult to decide which of the two arguments made by the CCIA – that strict liability for software is “inappropriate”, or that “innovation” is the defining characteristic of the industry – is the more absurd. For more than 50 years, the tech industry has been granted a latitude extended to no other industry: freedom from legal liability for the myriad defects and vulnerabilities of its core product, and for the harm those defects cause.

Even more remarkable is how tech companies’ claim to be the sole masters of “innovation” has long been taken at face value. But now two prominent competition lawyers, Ariel Ezrachi and Maurice Stucke, have called the companies’ bluff. In a remarkable new book, How Big-Tech Barons Smash Innovation – and How to Strike Back, they explain how tech companies tolerate only the innovation that suits their own interests; how they ruthlessly stifle disruptive or threatening innovations, whether by pre-emptive acquisition or naked copycatting; and how their dominance of search engines and social media platforms restricts the visibility of promising innovations that might be competitively or societally useful. As an antidote to tech puffery, the book is hard to beat. It should be required reading for everyone at Ofcom, the Competition and Markets Authority and the DCMS. And henceforth “innovation for whom?” should be the first question put to any tech booster extolling innovation.

What I’m reading

The circle of time
The thorny problem of keeping time on the internet is the subject of a fascinating New Yorker article by Nate Hopper on the genius who created the arcane software system that synchronises the network’s clocks.

Trussed up
Project Fear 3.0 is an excellent blogpost by Adam Tooze on the criticism being levelled at the current Tory administration.

Tech advances
Ascension is a thoughtful essay by Drew Austin on how our relationship with digital technology changed in the period 2019-2022.
