Hello dear readers,
I want to thank Alexei Oreskovic and Kevin Kelleher for filling in for me while I was away.
There is a lot going on in AI, and while this newsletter aims to bring the business reader the most important updates every week, sometimes it helps to step back and dive deeper. That is exactly what Fortune's upcoming Brainstorm AI conference will enable you to do.
Taking place in person in San Francisco on December 5th and 6th, Brainstorm AI will illuminate the most immediate opportunities for companies hoping to use AI to transform their businesses, while also highlighting some of the most important challenges. I will be one of the conference co-chairs, and I hope you will consider joining me. With Eye on AI readers in mind, I am pleased to offer you a special discounted rate of 20% off the regular registration fee. Use the code EOAI in the additional comments section of the registration form.
We have an amazing lineup of speakers for you, including high-level executives from Meta, Google, Nvidia, Wayfair, Microsoft, Apple, Land O'Lakes, and Capital One. Senior business leaders from Walmart, eBay, and Expedia will talk about how AI is supercharging their operations.
We also have a clutch of AI experts who will tell you where AI is headed and what you need to do to build a successful strategy around the technology. Among them are Fei-Fei Li, co-director of Stanford University's Institute for Human-Centered AI, who will speak on the human factor in AI, and Andrew Ng, founder of Landing AI, who will tell us about a key shift companies are making: from big data to good data.
Kevin Scott, Microsoft's chief technology officer, will discuss the advent of large language models and their impact on business. Robotics expert Pieter Abbeel will talk about how robots are poised to transform the workforce. Joelle Pineau of Meta's AI research lab will outline the key lessons the social media giant has learned about how to use AI effectively. And Colin Murdoch, chief business officer of DeepMind, will reveal how the cutting-edge research lab turns scientific breakthroughs into real business opportunities for Google.
This is your chance to interact with some of the world's top AI leaders and get your questions answered on how to use this powerful emerging technology in your own organization. How can you use AI to increase revenue and profits? How should AI be governed? How do you use AI ethically and responsibly? How can AI improve supply chain management? How can it change the retail experience? We'll cover all this and more. Plus, there will be plenty of time for networking and sharing experiences with fellow attendees.
If you're interested (and I hope you are), please apply to attend here (click the red "Register Now" button at the top of the page). And remember to use the code EOAI in the additional comments field of the application to get your exclusive Eye on AI reader discount!
Now here’s the rest of this week’s AI news.
AI in the news
The British data regulator warns against emotion recognition technology. The UK's Information Commissioner's Office has issued a warning that companies should avoid using AI that claims to be able to identify people's emotions from their facial expressions, saying there is little scientific evidence to support the technology's claims. Stephen Bonner, the ICO's deputy commissioner, said companies that ignore the warning and use emotion recognition software for important decisions, such as screening job applicants or detecting fraud, could face fines. The Guardian
Generative AI startups are raking in venture capital dollars. That's according to an article in The Wall Street Journal, pegged to a $125 million Series A funding round for Jasper, an Austin, Texas-based startup that uses AI to autogenerate blog and marketing copy. (Some of Jasper's back-end magic is actually handled by OpenAI's GPT and DALL-E generative models.) According to the paper, the funding round values Jasper at more than $1 billion. It cites Stability AI, which recently raised a $101 million seed round for its image-generation technology, as well as Replika, Musico, and GoCharlie.AI, as generative AI startups that have seen recent interest from investors.
Biotech startup begins human clinical trials of ALS drug discovered with AI. Verge Genomics, backed by pharma giants Eli Lilly and Merck as well as investment group BlackRock, has begun human clinical trials of a drug for the neurodegenerative disease amyotrophic lateral sclerosis (ALS, sometimes called "Lou Gehrig's disease") that was discovered with the help of AI, The Financial Times reported. The company used machine learning to analyze millions of datapoints and uncover new causal mechanisms implicated in ALS, which it then figured out how to target with the new drug it is trialling. Using AI shortened the initial research and testing period that precedes human clinical trials to four years, about half the time that phase of research would normally take, Verge said. According to the FT, several other AI-assisted drug discovery companies, including Exscientia, Insilico Medicine, and Evotec, now have at least one drug in human clinical trials. One of Insilico's drug candidates is also for ALS.
Eye on AI talent
BigBear.ai, an AI software company based in Columbia, Maryland, has hired Amanda Long as its chief executive officer, the publication Washington Technology reports. Long was previously vice president of IT automation at IBM.
Eye on AI research
DALL-E makes nice pictures, but it doesn't actually "understand" language. Generative AI systems such as OpenAI's DALL-E 2, which can be prompted with natural-language descriptions to generate images, are gaining traction. But it's easy to overestimate how smart these AI models really are. Evelina Leivada of the Universitat Rovira i Virgili in Tarragona, Spain, Elliot Murphy of the University of Texas Health Science Center in Houston, and deep learning skeptic Gary Marcus of New York University collaborated on a paper demonstrating that DALL-E 2 misinterpreted many natural-language prompts involving "binding principles and coreference, passives, word order, coordination, comparatives, negation, ellipsis and structural ambiguity." The researchers point out that most young children master these aspects of language, while DALL-E 2, trained on billions of images and captions, could not.
For example, prompt DALL-E with "dog chasing man" and at least some of the resulting images will show the person behind the dog, looking very similar to the images produced by "man chasing dog" (which also don't reliably show the person behind the dog). Prompt DALL-E with "a vase broken by a woman" and it will generate plenty of images of what appears to be a perfectly intact vase placed next to a woman. Comparative prompts such as "there are more cucumbers in the bowl than strawberries" often produced pictures that clearly contained more strawberries than cucumbers. The paper includes more examples and can be found here on arxiv.org, a non-peer-reviewed research repository.
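The paper's probing method boils down to contrastive prompt pairs: two prompts that differ only in the linguistic property under test, whose generated images are then compared by human raters. Here is a minimal sketch of how such pairs might be organized, using the examples above (the `PROBE_PAIRS` structure and helper function are illustrative, not the authors' actual test harness):

```python
# Contrastive prompt pairs probing linguistic phenomena in a
# text-to-image model (illustrative sketch, not the paper's harness).
PROBE_PAIRS = {
    "word_order": ("a dog chasing a man", "a man chasing a dog"),
    "passive": ("a vase", "a vase broken by a woman"),
    "comparative": (
        "a bowl with more cucumbers than strawberries",
        "a bowl with more strawberries than cucumbers",
    ),
}

def build_prompts(pairs):
    """Flatten the pairs into (phenomenon, prompt) tuples for generation."""
    return [(name, p) for name, (a, b) in pairs.items() for p in (a, b)]

prompts = build_prompts(PROBE_PAIRS)
# Each prompt would then be sent to a text-to-image model, and human
# raters would judge whether each image depicts the stated relation.
```

If the model genuinely tracked syntax, the two images in each pair would differ systematically; the paper's finding is that they often don't.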
The researchers conclude that all the recent attention placed on predicting sequences of words has come at the expense of developing a theory of how such processes could give rise to cognitive models of the world, and of how syntax serves to regulate form-meaning mappings. One recent account, they note, claims that language models represent "conceptual role" meanings, insofar as these can be inferred from relations between a model's internal states; their results show that such representations, to the extent they exist, are inadequate.
Fortune on AI
Ford and Volkswagen pull the plug on robocar unit Argo AI in major setback to their self-driving plans—By Keith Naughton, Monica Raymunt, and Bloomberg
3 Reasons Why Intel’s Mobileye IPO Flopped —by Christian Hetzner
Commentary: How Digital Twin Technology Can Close America’s Chip Manufacturing Gap—by Chris Rust
Over-the-counter sales could usher in a boom time for AI-powered hearing aids—by Kevin Kelleher
Dr. Dolittle, here we come. Well, at least that's what some of the more hyped headlines about using AI to "talk to animals" would have you believe. The reality, as usual, is more complicated, less Dr. Dolittle, and no less exciting for the science. Biologists are using machine learning and new sensor technology to analyze and understand animal communication: everything from the dances of bees to the low-frequency infrasound of elephants. Karen Bakker, a researcher at the University of British Columbia, has published a new book, The Sounds of Life: How Digital Technology Is Bringing Us Closer to the Worlds of Animals and Plants, which describes several of these efforts. In an interview with Vox this week, Bakker suggested the technology could, in some limited cases, allow us to communicate back to animals. She cites an experiment in which scientists in Germany studying bees used a bee-sized and bee-shaped robot to mimic the waggle dance that bees use to signal to other bees the location of good flowers for foraging.
As Bakker told Vox: "We can use artificial intelligence-enabled robots to speak animal languages and essentially breach the barrier of interspecies communication. Researchers are doing this in a very rudimentary way with bees and dolphins, and to a lesser extent with elephants. Now, this raises a very serious moral question, because the ability to talk to other species sounds intriguing and endearing, but it could be used either to create a deeper sense of kinship, or a sense of dominion and a manipulative ability to domesticate wild species that we as humans have never been able to control before."
Our mission to improve business is fueled by readers like you. To enjoy unlimited access to our journalism, subscribe today.