Successful adoption of AI requires training both the staff who will build or use the system and the AI application itself. Some of the solutions to these two problems are still being worked out.
Among the biggest challenges to successfully adopting AI in a business setting are two related to training. The first is training or finding personnel who are AI-savvy. The second is finding enough good data to train an AI application once you have one. Looking at these challenges and their possible solutions is the most constructive path forward.
RELX, a London-based source of technical information and analytics tools that serves multinational corporations, publishes an annual report for tech execs that looks at AI challenges. The 2019 report, for example, showed 71 percent of execs at companies adopting AI citing the talent shortage as a major problem, 21 percent citing "low data readiness," and 29 percent pointing to budget limitations as a major stumbling block.
The People Paradox
The 2021 RELX Emerging Tech Executive Report, released in November, shows that figure jumping from 71 percent in 2019 to 95 percent of execs surveyed in 2021 who say finding and retaining AI talent is a significant challenge, even as usage of AI in some form is pegged at between 85 and 90 percent across the surveyed industries.
Even more surprising, 39 percent of those surveyed said AI has had a negative impact on their industry this past year. This is because, in a year in which 4 million workers left their jobs in July 2021 alone, the need to upskill existing workers (or to find new workers with the needed skills) has become an even greater burden than it was in 2020.
This isn't due to lack of faith in the usefulness of AI, though. The study reports use of AI up 33 percentage points since 2018, 48 percent of those surveyed saying they invested additional funds in AI this year because of the COVID pandemic, and 93 percent of execs polled saying AI makes their businesses more competitive.
Another odd-seeming aspect is that, according to the survey, the number of companies investing in upskilling existing employees in AI declined from 65 percent in 2020 to 56 percent in 2021, the number investing in AI educational initiatives for their employees declined from 65 percent to 52 percent, and interest in hiring external talent to help with AI projects declined from 59 percent to 52 percent.
Vijay Raghavan, technology forum director at RELX, blames American workers "reconsidering the role that work plays in their lives," increasing competition for already scarce talent resources, thus making some companies "hesitant to invest in upskilling their employees on the basis that they're liable to be lured away after a year or two by a rival company."
Data scientists in particular are needed in AI projects (not to mention any other kinds of IT-related efforts), especially to collect and cleanse data sets needed to train AI systems, analyze and interpret data to identify business opportunities, and create methods and models to extract information from Big Data. The U.S. Bureau of Labor Statistics pegged the mean annual salary of a data scientist at $103,930 in May 2020, and demand continues to escalate.
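To make the cleansing step concrete, here is a minimal sketch in pure Python; the records and field names are hypothetical, and a real pipeline would use a tool such as those discussed later, but the core chores are the same: drop incomplete rows, remove duplicates, and normalize inconsistent values before training.

```python
# Minimal data-cleansing sketch: drop rows with missing fields,
# remove exact duplicates, and normalize text case before training.
# The records and field names here are hypothetical.

raw_records = [
    {"customer": "Acme Corp", "region": "WEST", "sales": 1200},
    {"customer": "Acme Corp", "region": "WEST", "sales": 1200},  # duplicate
    {"customer": "Bolt Ltd", "region": None, "sales": 900},      # missing field
    {"customer": "Core Inc", "region": "east", "sales": 1500},
]

def cleanse(records):
    seen = set()
    cleaned = []
    for rec in records:
        if any(v is None for v in rec.values()):
            continue  # drop incomplete rows
        key = (rec["customer"], rec["region"].lower(), rec["sales"])
        if key in seen:
            continue  # drop exact duplicates
        seen.add(key)
        cleaned.append({**rec, "region": rec["region"].lower()})
    return cleaned

clean = cleanse(raw_records)
print(clean)  # two records survive: Acme Corp and Core Inc
```

Trivial as it looks, this kind of work routinely consumes a large share of a data scientist's time on an AI project.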
Consider also the trend often being noted in non-IT media of all kinds of office workers discovering during the COVID pandemic that working at home is more pleasant and convenient than working in a centralized office. What's more, many would like to continue doing that even when the pandemic eases. Combine this with the scarcity of AI-savvy workers that's already been underway for several years and it becomes clear that finding useful talent is going to require some original thinking. Fortunately, there is some to be had.
Methods for Deepening the AI Talent Pool
Gartner Group offers a policy solution for attracting and retaining data and analytics personnel that incorporates a few unusual strategies. The first is that enterprises looking for new people should consider the pool of potential employees Gartner describes as "neurodiverse," by which it means people defined as having "bipolar disorder, dyspraxia, dyslexia, attention deficit hyperactivity disorder (ADHD) and Tourette syndrome." While the normal resume process might screen many such people out, Gartner argues that would be a mistake.
"Neurodivergent candidates are wired to think out of the box and gifted in skills that are essential for digital success. For example, people with ADHD have exceptional focus and problem-solving abilities. Similarly, autistic people are meticulous and have higher analytical thinking," Gartner's policy summary explains. "In many ways, the shift to remote work or a hybrid organization is favorable for neurodiverse candidates, as they can work in the comfortable setting of home. As they don’t have to experience the physical or spatial distractions of a traditional office, they can [more] productively execute their tasks."
In addition, Gartner recommends organizations recruit and retain talent more effectively by becoming "human-centric": offering more hybrid and remote working options, enabling employees to learn skills "that are useful outside of the organization," and showing "concern for their families and personal lives." It almost seems as if some of the attitudes expressed by The Good Doctor and A Christmas Carol are coming home to roost in the AI realm.
Gartner goes on to recommend retaining talent by fostering a more ethical company culture and promoting more data literacy. A number of blogs also suggest building internal AI teams by forming a coalition of the willing who have an interest in AI and bettering themselves--in effect, developing "citizen data scientists" from existing personnel. While that seems logical, it's also hard to square with RELX's more negative findings about the results of upskilling employees at companies that have already adopted AI in some measure. However, embracing this contradiction may be unavoidable.
A different suggestion from the blogosphere is to hire more data scientists or an AI team from outside. One possibility for overcoming the high salaries or fees of such experts is to have several smaller businesses with an interest in AI pool financial resources to hire an expert or a consulting firm collectively. Ideally, this would be several organizations that are not direct competitors but might have complementary lines of business or perhaps social connections between their executives or managers that might foster a sense of mutual trust.
Businesses that want to start small with a pilot project and learn as they build can adopt open-source AI apps that are available for free, or license applicable solutions that have been developed by larger AI innovators. This at least starts educating staff, builds internal confidence in AI generally, and can lead to other practical ideas.
The Training Data Problem
AI applications have to be trained in order to provide useful results. There's a whole machine-learning process an app must go through to learn your business and its general industry.
This can be a tough problem because it's not always clear what data is really needed or where to get it. One thing that is clear is that it's not enough to just feed in the past five years' sales records and expect to get reliable information on what your next marketing move should be. You need information on market trends in your industry, perhaps social media-based feedback on what consumers like and don't like about their most recent experiences buying products or services of the kind your enterprise offers, or consumer demographics, for example. And this is not to mention a whole host of data points that may be either unique or of special relevance to your type of business.
To begin with, someone needs to compile a list of data you already have access to. That will give you some clues about what data you still need to find. Other methods include examining websites of direct competitors and similar-sized companies that use AI to get some idea of how it's used in those enterprises' contexts. Explore whether there are universities or other resources in your area that you could form some kind of partnership with.
Where do you get more data, and how do you classify it? You could hire a data scientist or an outside consultant who is a subject matter expert (SME) in your industry, or you could try to develop SMEs among your own employees. All of these paths are expensive and are fraught with the pitfalls already mentioned.
There are a number of open-source data-mining apps that can help:
- Apache Mahout, a domain-specific language that helps data scientists and mathematicians implement their own algorithms;
- DataMelt, which helps analyze large data volumes using multiple computer languages;
- ELKI, a Java-based data miner;
- H2O, an open-source machine-learning platform;
- KEEL, a Java-based data-discovery tool;
- KNIME, which helps users build data-science workflows;
- MOA, a machine-learning tool for data streams;
- NLTK, a Python-based tool for natural-language data;
- Orange, a tool for creating interactive data visualizations;
- Rattle, a GUI for data mining using the R programming language;
- R-Programming, a language for statistical programming and graphics; and
- Scikit-learn, which offers tools for predictive data analysis.
(There are other fee-based data miners, analytics solutions, and machine-learning applications for AI too numerous to mention here.)
Synthetic data generation is a data-augmentation technique that uses artificially created data, rather than data generated by actual events, for machine learning. Synthetic data helps further machine-learning processes in situations where actual data isn't available or might contain data that is normally prohibited from use, such as personally identifiable or health-related information.
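As an illustration, a minimal sketch of the idea in plain Python; the field names and distributions here are invented for the example, and real synthetic-data tools fit these distributions to actual data rather than hard-coding them:

```python
import random

# Generate synthetic customer records that mimic the statistical shape of
# real data (the distributions here are invented for the example) without
# exposing any actual personally identifiable information.
random.seed(42)  # reproducible example

def synthetic_customers(n):
    records = []
    for i in range(n):
        records.append({
            "id": i,  # surrogate key, not a real identity
            "age": max(18, min(90, int(random.gauss(45, 12)))),
            "annual_spend": round(random.lognormvariate(6.0, 0.5), 2),
        })
    return records

data = synthetic_customers(1000)
ages = [r["age"] for r in data]
print(min(ages), max(ages))  # ages are clipped to the 18-90 range
```

A model trained on records like these learns the same broad patterns as one trained on real customer data, but no real customer's information is ever at risk.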
Local Interpretable Model-agnostic Explanations (LIME) is an analysis method for data models that attempts to address the "black box" problem of AI, which is that most AI applications are unclear about how they reach the results they provide. Users may know the inputs and can see the outputs, but what goes on between those two points can't be independently verified because how the AI operates is usually treated as a trade secret. LIME tries to provide an explanation by extracting parts of a data set and attempting to explain those parts as a means of explaining the rationale behind a particular data model. By changing one input and seeing how it affects the output, and doing that many times over, it provides a way of checking the validity of a data model without knowing exactly how an AI application reaches its conclusions.
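The perturb-and-observe idea can be sketched without the LIME library itself. Below, a stand-in "black box" (in practice this would be an opaque AI system whose internals the analyst cannot see) is probed by nudging one input feature at a time and averaging how far the output moves; the hidden weights are invented for the example.

```python
import random

# LIME-style probing sketch: perturb one input feature at a time and
# measure how much the output shifts, to estimate local feature importance.
def black_box(features):
    # Hidden weights the analyst is assumed NOT to know; a stand-in for
    # an opaque AI model. Invented for this example.
    return 5.0 * features[0] + 0.5 * features[1] - 1.0 * features[2]

def local_importance(model, point, trials=200, eps=0.1):
    random.seed(0)  # reproducible example
    base = model(point)
    importance = []
    for i in range(len(point)):
        total = 0.0
        for _ in range(trials):
            perturbed = list(point)
            perturbed[i] += random.uniform(-eps, eps)  # nudge one feature
            total += abs(model(perturbed) - base)      # observe output shift
        importance.append(total / trials)
    return importance

scores = local_importance(black_box, [1.0, 1.0, 1.0])
print(scores)  # the first feature dominates, matching its hidden weight
```

Even without opening the box, the probe correctly reveals that the first input drives the output far more than the others, which is exactly the kind of sanity check LIME enables on a real model.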
Don't Stand Still
Bringing AI to an existing organization presents numerous challenges. Some of these, at least at the present time, may seem contradictory or insurmountable. However, AI is becoming so hot that standing still because you don't know which way to move is equivalent to falling behind. Waiting another year or two to get started with AI might prove to be a fatal delay, depending on what industry you're in. Jumping in and attempting to unravel the Gordian knots this technology presents is necessary if you want to keep your enterprise engaged, even if solutions today aren't always as clear as one might hope.