AI, ML & BS: Myths of modern artificial intelligence

Last updated on August 26th, 2022

Let’s start with the obvious: artificial intelligence (AI) is not going to put us all out of work or kill everyone. There are plenty of articles already arguing that AI is not a threat to humanity, that the real problem with AI is human bias, or that the threat to jobs from AI is overblown.

Instead, we want to talk about a different set of AI myths. The tendency to project AI’s impact into the far future is understandable, but the hyperbolic scenarios that result obscure more insidious misperceptions of AI as it exists today.

Let’s look at three in particular.

Myth #1 - AI is Easy

Like many myths, this one has a basis in reality. Artificial intelligence is an incredibly powerful tool that can dig into mountains of data to find nuggets of insight that would be otherwise invisible to human beings. However, this has led to the misconception that AI can easily solve any problem. For all its power, AI is still a tool and, just like other tools, you need to know what you’re working on in order to get the most out of it.

Generic “AI” providers might argue that domain knowledge doesn’t matter because they’re just looking for patterns in the data. If AI can accomplish tasks as diverse as identifying images of cats, beating professional Go players, and predicting engine failures, why would domain knowledge matter?


In fact, domain knowledge is crucial to the success of an AI project for two reasons.

For one, it simply isn’t possible to achieve the best results in AI without domain knowledge, because data scientists use that knowledge to inform their feature engineering and model architecture. For example, they might exclude certain features from analysis because those features are known to be irrelevant, or change the algorithm’s cost function to reflect a desired outcome (e.g., first-time yield, or FTY). The reason image recognition algorithms are so general-purpose is that images share many features in common (edges, shapes, and so on). For more complex problems involving many different signals, however, there isn’t necessarily much commonality between those signals. Domain knowledge helps to fill the ensuing gaps.
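To make those two levers concrete, here is a minimal sketch in Python with scikit-learn. Everything in it is assumed for illustration: the synthetic sensor data, the premise that two columns are irrelevant ambient readings, and the ten-to-one cost ratio standing in for a first-time-yield objective. The domain knowledge enters here through a custom evaluation metric; the same reasoning applies to custom training losses.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                  # six sensor readings (synthetic)
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # 1 = unit passed, 0 = failed

# Domain knowledge, part 1: an engineer knows columns 4 and 5 are ambient
# readings with no bearing on yield (an assumption here), so drop them.
X_informed = np.delete(X, [4, 5], axis=1)

# Domain knowledge, part 2: score models on what the business cares about.
# Shipping a bad unit (predicting "pass" on a failure) is assumed to cost
# ten times more than flagging a good one, echoing a first-time-yield goal.
def fty_cost(y_true, y_pred):
    missed_failures = np.sum((y_true == 0) & (y_pred == 1))
    false_alarms = np.sum((y_true == 1) & (y_pred == 0))
    return -(10 * missed_failures + false_alarms)  # less negative is better

scores = cross_val_score(GradientBoostingClassifier(), X_informed, y,
                         scoring=make_scorer(fty_cost), cv=5)
print(f"mean cross-validated cost: {scores.mean():.1f}")
```

A generic provider optimizing plain accuracy would treat both kinds of mistake as equal; only someone who knows the process knows they aren’t.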

The other reason domain knowledge matters to AI is communication between client and provider. When you’re building a general-purpose machine learning model, it’s easy to lose sight of the client’s objective if you don’t have a good sense of what they’re trying to accomplish and what that means physically. Effectively communicating results is a serious challenge in machine learning. It’s an ongoing process, and it’s naïve to understate that challenge by entertaining the idea that a general-purpose AI solution could be sufficient.

Myth #2 - AI is Mysterious

This may sound somewhat paradoxical in relation to the first myth, but there’s actually a close connection between them. The Internet’s favorite cognitive bias, the Dunning-Kruger Effect, may help explain the origin of the myth that present-day AI is somehow inscrutable. In essence, the Dunning-Kruger Effect describes an inverse correlation between one’s knowledge of a subject and one’s confidence in that knowledge. Think of the differences in knowledge and certainty between a freshman after their first semester at college and a grad student before their thesis defense.

In the context of AI, it’s easy enough to find examples of the Dunning-Kruger Effect at work: take the big debate on the prospects of artificial intelligence between two people who are most definitely not experts on the topic. A recent New Yorker article by Jonathan Zittrain emphasized the ostensible mysteries of AI by introducing the concept of intellectual debt, which is accrued when we discover something that works without knowing why but act on that insight regardless, as with drug discovery through machine learning in the pharmaceutical industry.

Zittrain positions intellectual debt as a peril of overreliance on AI, but there are many applications of artificial intelligence where we have a very good understanding of how the models work internally. With image recognition, for example, we can extract a lot of information about what the models are doing, because they’re based on processes that scientists and engineers used and understood well before machine learning came on the scene.

Granted, a machine learning model has far more variables it can adjust, and if you’re exploring more possibilities across huge data sets than a human being ever could, of course the results will be better. But that’s not to say we can’t wrap our heads around what the model is doing; we can. We just have to do it a few dimensions at a time.
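As one concrete way of looking “a few dimensions at a time,” here is a minimal sketch using partial dependence, a common interpretability technique (my illustrative choice, not necessarily what any particular project uses). It averages a model’s predictions while sweeping a single feature, turning a five-dimensional fit into a series of one-dimensional views. The data is synthetic so the example is self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(1000, 5))          # five input dimensions
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Sweep one feature at a time, averaging predictions over the data: a
# five-dimensional function viewed through a series of one-dimensional slices.
for feature in range(5):
    pd = partial_dependence(model, X, features=[feature], grid_resolution=20)
    spread = pd["average"][0].max() - pd["average"][0].min()
    print(f"feature {feature}: effect spread {spread:.2f}")
# Features 0 and 1 show large spreads (they actually drive y); the others
# come out nearly flat, which is exactly what the generating process implies.
```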

Myth #3 - AI Turns Data into Money

The third myth of modern artificial intelligence is what you get when you combine Myths #1 and #2 with the current market interest in all things AI. If you think artificial intelligence is easy and mysterious, and you’ve been paying attention to all the acquisitions of AI companies in recent years, you might naturally conclude that artificial intelligence is a sort of modern-day Philosopher’s Stone, spinning gold out of the data that companies were already collecting, waiting for just such an opportunity to arise.

It’s a request that we’ve gotten surprisingly often: “Hey, we have all this data: now you turn it into money.” Unfortunately, not all data is equally valuable, and data quality makes a big difference to the success, even the viability, of an AI project. Information content is what matters; you might have 10 TB of data, but if it covers only 100 examples from a population of 1,000,000 units, it’s not actually that useful. The moral here (one that bears repeating) is that data is not information, and it’s the latter that really matters.
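A toy sketch of that distinction, with made-up numbers rather than data from any real project: two datasets can occupy comparable volume while carrying very different amounts of information.

```python
import numpy as np

rng = np.random.default_rng(2)

# Dataset A: a million rows of readings, but from only 100 distinct units.
units_a = rng.choice(100, size=1_000_000)
# Dataset B: ten thousand rows, each from a different unit.
units_b = np.arange(10_000)

for name, units in (("A", units_a), ("B", units_b)):
    print(f"dataset {name}: {units.size:>9,} rows, "
          f"{np.unique(units).size:>6,} distinct units covered")
# For a model that must generalize across units, the smaller dataset B
# carries far more usable information than the much larger dataset A.
```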


This brings us to an important point about data and generic “AI” providers. You can tell a lot about whether a company’s claims regarding AI are legitimate by its attitude toward data. If someone tells you they can provide AI without even looking at your data, that’s a sign that they’re not really using artificial intelligence. More likely, they’re falsely presenting standard business intelligence or statistical analysis as AI. Another sure sign is if their software engineers and computer scientists suddenly became “data scientists” overnight. 

The notion that AI is actually being used by so many companies is almost a myth in its own right.

Fact & Fiction in Modern AI

Artificial intelligence may not be easy, or mysterious, or a guaranteed way to turn data into money, but that doesn’t mean all of the talk around it should be taken with a grain of salt. AI has already garnered a reputation for being transformative, and that’s well-deserved. We’ve seen evidence that AI can transform certain industries: Amazon, Facebook, Google, and Netflix are all generating substantially more revenue because of machine learning.

What’s really exciting is the possibility of finding other businesses that can be similarly transformed.

