Generative AI models should expect increasing resistance | Tech Rasta


A few weeks ago in IT Matters, I wrote about foundation models, a completely new approach to artificial intelligence (AI). Foundation models upend the traditional method of training AI programs on small, curated data sets, and I argued they could be a game changer. Foundation models, built on large neural networks, are also known as ‘generative AI models’. These are the new buzzwords, and I have seen many startups that want their investors (venture capital firms, angel investors and the like) to believe that their work is based on such generative AI models.

The defining feature of generative AI models is that they ingest almost every bit of information available on the web, a data store that doubles in size every two years, and use it to train AI programs that can then generate output of their own. According to Live-counter.com, which tries to track the size of internet data, that figure is now close to 80 zettabytes. A zettabyte is one trillion gigabytes.
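The two-year doubling rule and the zettabyte conversion above can be sketched in a few lines. This is a back-of-the-envelope illustration of the arithmetic only; the figures are the column's, and the projection function is an assumption, not Live-counter.com's methodology.

```python
# 1 zettabyte = 10^21 bytes = one trillion (10^12) gigabytes, as stated above.
ZETTABYTE_IN_GB = 10**12

def projected_size_zb(current_zb: float, years: float) -> float:
    """Project the data store's size assuming it doubles every two years."""
    return current_zb * 2 ** (years / 2)

current = 80  # zettabytes today, per the column

print(projected_size_zb(current, 2))   # one doubling: 160 ZB
print(projected_size_zb(current, 4))   # two doublings: 320 ZB
print(current * ZETTABYTE_IN_GB)       # 80 ZB expressed in gigabytes
```

At that rate, today's roughly 80 zettabytes would quadruple to 320 zettabytes within four years.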

OpenAI, heavily backed by Microsoft, has two such models: GPT-3, which is primarily for text, and DALL-E, which focuses on images. GPT-3 analyzed thousands of digital books and nearly a trillion words posted on blogs, social media and elsewhere on the internet. Its competitor is Google, whose own offering in generative AI is called BERT.

In contrast, more focused cognitive models rely on smaller data sets (some of them padded with dummy data) that are used to train AI programs for specific use cases. For example, a medical radiology system is trained only on X-rays, magnetic resonance imaging (MRI) scans and other medical images; it does not train on poetry, music or other information unrelated to the work at hand.

Sponsors of generative AI have a lofty goal: to create a foundation on which all kinds of AI applications can be built. There are several problems with this ‘sledgehammer to kill a fly’ approach that I have touched on in previous columns. Today, however, I want to dwell on two opposing views, albeit from different industries, regarding the impact generative AI models are having. One camp thinks they are a flash in the pan; the other is girding for a fight.

OpenAI’s text-to-image generator is called DALL-E. According to Techcrunch.com (tcrn.ch/3DL2TyF), its best-known rival, newcomer Stability AI, has pulled in $101 million in funding for its Stable Diffusion system. In response to this funding news, Futurism.com reports that generative AI is all flash and no substance, according to Will Manidis, founder and CEO of AI-driven healthcare startup ScienceIO. Although it is attracting VC cash now, Manidis points out that many such ventures are quickly discarded.

Manidis’ argument centers on text-to-image generators such as DALL-E. He believes there really isn’t much room for growth in the “creator economy”. Yes, creating AI-made artifacts is fun and sometimes useful, but, according to him, turning everyone into a creator will not generate big new income streams. He explained his views on the matter in a Twitter thread on October 25 (bit.ly/3frRfPU).

According to his thread, “Every year billions of hours of human energy are wasted on trivial tasks. data entry, form filling, basic knowledge work kind of stuff,” and these foundation models may find much better uses there, in the basic automation of manual tasks, than in more refined use cases such as medical technology. One might argue that this position is a little self-serving, but it is not without merit: data entry and the expert analysis of medical images are, after all, very different tasks.

Meanwhile, at the other end of the spectrum, all is not well for generative AI models either, with the music industry up in arms against them.

AI generator tools can create brand-new music tracks at the click of a button, and they are starting to threaten musicians’ livelihoods. This has lobbyists worried. The Recording Industry Association of America (RIAA), for example, is concerned that AI-based music production could threaten the income and rights of human artists.

The RIAA has a long history of fighting piracy and counterfeiting, first by going after illegal digital copies made on CDs and other media, and later through efforts to enforce digital copyrights for musicians. According to RIAA filings with the Office of the US Trade Representative, the US music industry contributes $170 billion to the US economy, supports 2.47 million jobs and accounts for 236,000 businesses. The RIAA states that for every dollar of direct revenue in the US music industry, an additional 50 cents is generated in adjacent industries. Digital sources now account for about 90% of music revenue, while physical products such as CDs account for 10%, a clear cross-over made possible by products such as the original Apple iPod and, later, the smartphone.

The RIAA is now gunning for online services that use generative AI to extract and remix recordings in the style of popular human artists. The lobby group claims these services infringe copyrights and pinch the pockets of its members directly. Given how successful the RIAA has been in protecting its members in the past, those using generative AI for music production have cause for concern.

Siddharth Pai is the co-founder of Siena Capital and author of ‘TechProof Me’.


