Problems of AI: 3 Illustrative Examples

What are the problems of AI or issues with artificial intelligence? The author narrates personal experiences and shares insights into the future.

Reflections on AI

Putting decision making into the hands of machines, which is the idea of artificial intelligence or AI, places humans in a precarious position. That's because machines do not feel the way humans do. In matters of life and death, it would be disastrous to let machines do the "discerning," since they respond to situations based only on accumulated data.

Humans cannot do everything, so they rely on AI. Whatever they feed into the AI gets applied, without the machines taking any accountability.

Unlike humans, machines have no accountability. They may be programmed to be moral, in the sense defined by the programmer, but questions frequently arise about the judgments machines make. Machine learning has its limitations.

What will happen if we use robots in wars? Will they decide who lives and who dies?

What if something goes wrong in the system? What if someone surreptitiously inserts malicious code that changes the whole program algorithm? Is it the end of humanity?

These are apprehensions that can't be far from reality. Sci-fi movies like AI depict a future dominated by machines. How good can a machine really get?

We are no longer at the stage of projecting how AI can affect our future. It is already here, working with us.

Now, AI is a reality we have to contend with. We can already see its influence in our day-to-day lives. But there are issues with artificial intelligence that we need to be aware of.

I list three real-life examples of the problems of AI based on personal experience in the following sections.

Example Problems of AI

1. Content writing misinterpretation

A few months back, some kind of AI programming failed to recognize legitimate sentences in an article I wrote. It flagged otherwise acceptable words as objectionable content. I can't even write those words here because Google's AI might flag this article again for objectionable content.

The specific article was about the medicinal uses of plants. Among the uses of medicinal plants is the treatment of health conditions like blood in the s_men (a male's reproductive fluid). Another paragraph described a feature of a plant that resembles the comb of a c_ck (a male chicken).

Again, to avoid getting this write-up flagged as well, I deliberately left a letter blank in these two words, which the Google algorithm identified as objectionable because, taken together, they would look something like c_ck s_men.

I never intended those two words to carry the offensive connotations they might suggest to a reader. The AI algorithm identified them as objectionable when, in fact, it had taken them out of context to mean something else, something malicious, when their purpose was to describe healing someone who is sick.

Now, who’s malicious? Obviously, there’s something wrong with the AI, specifically its programming. Did the programmers ever think the machine would interpret things like the way it worked in my article?

This example shows how AI programming can lead to unexpected results. Thus, it needs to be updated and improved each time — after damage has been done, perhaps?
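To make the failure mode concrete, here is a minimal, hypothetical Python sketch of how a naive keyword-pair filter could flag an article like mine. This is not Google's actual system; the blocked pair, the function name, and the sample text are placeholders, and the blanked words stand in for the real ones.

import re

# Hypothetical sketch of a naive keyword-pair filter, not Google's actual system.
BLOCKED_PAIRS = {("c_ck", "s_men")}  # word pairs treated as objectionable when they co-occur

def is_flagged(text):
    """Flag the text if both words of any blocked pair appear anywhere in it,
    ignoring which sentences they belong to and what they actually mean."""
    words = set(re.findall(r"[a-z_]+", text.lower()))
    return any(a in words and b in words for a, b in BLOCKED_PAIRS)

article = (
    "The leaf extract helps treat blood in the s_men. "
    "Another plant bears flowers resembling the comb of a c_ck."
)
print(is_flagged(article))  # True: the filter never checks the medical context of either sentence

The two words sit in separate, benign sentences, yet the sketch flags the whole article because it only checks co-occurrence, which is roughly the out-of-context behavior described above.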

Machines can mimic humans through AI.

2. Automated AI spelling and grammar correction gets it wrong

I subscribed to a popular spelling and grammar correction application, proudly marketed as AI-powered, hoping to speed up the editing of articles I write as well as those submitted by writers on this site. All went well for several months, perhaps even years; I have forgotten how long I have been subscribed.

However, I noticed that the best an application like this could catch were very common mistakes in spelling and in basic grammar, such as subject-verb agreement. And the same suggestions got repeated each time, so I became familiar with the expected sentence construction even before the AI suggested it.

I feel there’s no more need for the spelling and grammar application as I became familiar with the suggestions. The auto-suggestions are predictable. It’s as if I’m now the AI programmed by a machine.

Degrading performance because of inappropriate data input?

I admit that the suggestions on framing sentences look good, but in later years I found them only occasionally helpful. Something appears to have gone wrong as the AI learned by itself. It no longer works as well as it did. More data fed into the AI appears to have muddled its originally correct programming.

I also noticed that the spelling and grammar corrections do not always match what I have in mind; sometimes they change my writing to mean something else. So I reject the suggestion. When I do, it asks me if I'm sure of what I'm doing. Of course I am.

The AI now "learns" from me and accepts what I wrote as correct. My phrasing may be more appropriate in certain situations, but not in every sentence I write.

The point is that users may end up feeding wrong lessons back into the AI as it adapts to their choices, and the writing AI then suggests unacceptable sentences. It can only make suggestions; it cannot teach the user the correct sentence construction for a particular context.
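As a rough illustration of that feedback loop, here is a small hypothetical Python sketch in which whatever phrasing the user keeps is logged as "correct" and later re-suggested everywhere, with no notion of context. The function names and example phrases are invented; this is not the vendor's actual training pipeline.

from collections import Counter

# Hypothetical sketch of the feedback loop described above, not any vendor's real pipeline.
kept_phrasings = Counter()  # phrasings the tool has "learned" from user choices

def record_user_choice(original, suggestion, accepted):
    """Treat whatever the user kept as ground truth, with no record of its context."""
    kept_phrasings[suggestion if accepted else original] += 1

def suggest(phrase):
    """Re-suggest the most frequently kept phrasing, regardless of the sentence it sits in."""
    if not kept_phrasings:
        return phrase
    return kept_phrasings.most_common(1)[0][0]

# One writer's context-specific wording becomes the global "correct" form...
record_user_choice("the data is", "the data are", accepted=False)
record_user_choice("the data is", "the data are", accepted=False)
# ...and is then pushed onto sentences where it may not belong.
print(suggest("the data are"))  # prints "the data is"

The sketch shows the asymmetry the section describes: the tool records what users keep, but it cannot tell in which contexts that wording is actually correct.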

Final refinements rest with the author

My conclusion is that if you are a talented writer, you are better off correcting sentence construction yourself. Using AI in content writing is not perfect; or more accurately, it will never match human creativity. I expounded on this issue in my recent articles on how to write with AI and on AI blogging.

For me, the price tag of an AI grammar and spelling subscription is not justified by the correction suggestions and the purportedly better writing touted by the marketers. The price should be lower than it is, covering only the spelling and basic grammar corrections the tool actually delivers. More complex sentences are better written by the author, infused with his or her own writing style.

Although the proponents of AI see its advantages, there are also issues with artificial intelligence that they must contend with. You can automate almost everything. But issues that require careful judgment and consideration, particularly where human life is involved, need to be left to humans who care.

Machines can make life easier, but these human inventions can be a threat to humans unless treated with utmost care and infused with accountability.

3. You cannot appeal to AI-powered decision making

Human decision making can be delayed whenever there is doubt or a need to defer the decision. That's because emotions play a very important role in matters that are difficult to deal with, given their lifetime consequences.

In business, however, profitability comes first. When something threatens a business's profitability, automation becomes a convenient scapegoat to take the blame.

For example, given the millions of websites that large corporations have to deal with, resorting to AI-powered decision support systems seems a sensible option. Final decisions on customer concerns are issued automatically, and specific questions receive generic, albeit vague, answers.

Appealing to robots or AI-powered machines is like talking to the Terminator's hand. "Talk to the hand," he says, dismissing any appeal for consideration.
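The experience plays out roughly like the following hypothetical Python sketch of an automated "final decision" flow with no human escalation path. The class, the canned message, and the decision logic are invented for illustration and do not depict any particular company's system.

from dataclasses import dataclass

# Hypothetical sketch of an appeal-proof automated decision flow; everything here is invented.
@dataclass
class Decision:
    case_id: str
    outcome: str
    reply: str

CANNED_REPLY = (
    "After careful automated review, we are unable to reverse this decision. "
    "This decision is final. Please visit our help center for more information."
)

def decide(case_id, complaint):
    """Issue an automatic final decision; the complaint text never reaches a human."""
    return Decision(case_id=case_id, outcome="denied", reply=CANNED_REPLY)

def appeal(decision, appeal_text):
    """An 'appeal' simply re-runs the same automated decision."""
    return decide(decision.case_id, appeal_text)

first = decide("case-001", "My article was flagged by mistake; the words were medical terms.")
second = appeal(first, "Here is the context proving the content is legitimate.")
print(first.reply == second.reply)  # True: the appeal changes nothing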

Precautions for a Better Future

So what will artificial intelligence bring us in the future?

While artificial intelligence offers advantages, relying on it too much could bring unexpected problems that may prove harmful to its creators. Thus, programmers must follow certain ethical principles for machine learning to ensure that the purpose of automation is not defeated.

What if programmers do not follow the ethical guidelines? That is where a lot of problems can come in. Non-compliance with sound practices can lead to disaster: the Rise of the Machines may become a reality.

Will we allow technology to work beyond our control?