
Artificial intelligence is becoming a true industry according to a sweeping new report

Artificial intelligence is becoming a true industry, with all the pluses and minuses that entails, according to a sweeping new report.

Why it matters: AI is now in nearly every area of business, with the pandemic pushing even more investment in drug design and medicine. But as the technology matures, challenges around ethics and diversity grow.

Driving the news: This morning, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) released its annual AI Index, a top-level overview of the current state of the field.

A majority of North American AI Ph.D.s — 65% — now go into industry, up from 44% in 2010, a sign of the growing role that large companies are playing both in AI research and implementation.
“The striking thing to me is that AI is moving from a research phase to much more of an industrial practice,” says Erik Brynjolfsson, a senior fellow at HAI and director of the Stanford Digital Economy Lab.

By the numbers: Even with the pandemic, private AI investment grew by 9.3% in 2020, a bigger increase than in 2019.

As a whole, AI contains many subfields, including natural language processing, computer vision, and deep learning. Most of the time, the specific technology at work is machine learning, which focuses on developing algorithms that analyze data and make predictions, and which relies heavily on human supervision.

For the third year in a row, however, the number of newly funded companies decreased, a sign that “we’re moving from pure research and exploratory small startups to industrial-stage companies,” says Brynjolfsson.
While academia remains the single-biggest source worldwide for peer-reviewed AI papers, corporate-affiliated research now represents nearly a fifth of all papers in the U.S., making it the second-biggest source.
The drug and medical industries took in by far the biggest share of overall AI private investment in 2020, absorbing more than $13.8 billion — 4.5 times greater than in 2019 and nearly three times more than the next category, autonomous vehicles.

The catch: While the field has experienced sudden busts in the past — the “AI winters” that vaporized funding — there’s little indication such a collapse is on the horizon. But industrialization comes with its own growing pains.

SMU Assistant Professor of Information Systems Sun Qianru likens training a small-scale AI model to teaching a young kid to recognize objects in his surroundings. “At first a kid doesn’t understand many things around him. He might see an apple but doesn’t recognize it as an apple, and he might ask, ‘Is this a banana?’ His parents will correct him, ‘No, this is not a banana. This is an apple.’ Such feedback in his brain then signals to fine-tune his knowledge.”

Cutting-edge AI increasingly requires huge amounts of computing and data, which puts more power in the hands of fewer big players.
Conversely, the commoditization of AI technologies like facial recognition means more players in the field, both domestically and internationally, which makes it more difficult to regulate their use.
As AI grows, the ethical challenges embedded in the field — and the fact that 45% of new AI Ph.D.s are white, compared to just about 2% who are Black — will mean “there’s a new frontier of potential privacy violations and other abuses,” says Brynjolfsson.

Training an AI model

Because of the complexity of AI, Professor Sun walks through general concepts and current trends in the field before diving into her research projects.

She explains that supervised machine learning involves a model training itself on a labeled data set. That is, the data is tagged with the very information the model is being built to determine, and it may even be sorted into the categories the model is supposed to learn to assign. For example, a computer vision model designed to identify an apple might be trained on a data set of various labeled apple images.

“Give it data, and the data has labels,” she explains. “An image could contain an apple, and the image goes through the deep AI model and makes some predictions. If the prediction is right, then it’s fine. Otherwise, the model will get a computational loss or penalty to backpropagate through to modify its parameters. And so the model gets updated.”
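
In code, the feedback loop she describes looks roughly like the minimal sketch below, written here in PyTorch. The tiny network, the random stand-in “images”, and the apple/not-apple labels are illustrative assumptions for this sketch, not details taken from the report or the interview.

```python
import torch
import torch.nn as nn

# Toy labeled data set: 64 flattened 8x8 "images", each labeled 1 (apple) or 0 (not apple).
# Random tensors stand in for real pictures purely for illustration.
images = torch.randn(64, 64)
labels = torch.randint(0, 2, (64,))

# A deliberately tiny model in place of a real "deep AI model".
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()                         # the "computational loss or penalty"
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(10):
    predictions = model(images)          # the image goes through the model and makes predictions
    loss = loss_fn(predictions, labels)  # compare the predictions with the labels
    optimizer.zero_grad()
    loss.backward()                      # backpropagate the penalty...
    optimizer.step()                     # ...to modify the parameters, so the model gets updated
```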

The AI Index found that while the field of AI ethics is growing, the interest level of big companies is still “disappointingly small,” says Brynjolfsson.

Details: Those growing pains are at play in one of the most exciting applications in AI today: massive text-generating models.

Systems like OpenAI’s GPT-3, released last year, swallow hundreds of billions of words along the way to producing original text that can be eerily human-like in its execution.
Text-generating AI models could help polish human-written resumes for job searches, but they could also potentially be used to spam corporate competitors with realistic computer-generated applicants, not to mention warp our shared reality.

“What we increasingly have with these models is a double-edged sword,” says Kristin Tynski, a co-founder and senior VP at Fractl, a data-driven marketing company.
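
To give a concrete sense of how accessible this kind of text generation has become, here is a minimal sketch using the open-source Hugging Face transformers library with GPT-2, a smaller, publicly released predecessor of GPT-3. The model choice, prompt, and settings are illustrative assumptions, not something the AI Index evaluated.

```python
from transformers import pipeline

# Load a small, publicly available text-generation model (GPT-2).
generator = pipeline("text-generation", model="gpt2")

# A resume-style prompt; the model continues it with original text.
result = generator(
    "Dear hiring manager, I am excited to apply for",
    max_length=60,            # cap the length of the generated continuation
    num_return_sequences=1,   # ask for a single completion
)
print(result[0]["generated_text"])
```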

What to watch: The growing geopolitical AI competition between the U.S. and China.