
Tuesday, January 20, 2026

DigiSense: Why Some Remain Hesitant To Use AI

While we have more active AI users today, not everyone jumped on the bandwagon when it arrived.

Some people were busy asking computers to help write, explain, or organize things. Others quietly stayed on the sidelines, unsure whether this new tool was something they needed—or something they should avoid.

This hesitation is not laziness, stubbornness, or fear of learning. In many cases, it comes from asking a very reasonable question: Why should I use this when I don’t see the need? And for many people, that question is not only fair—it is correct.

Image by Matheus Bertelli @ pexels.com

Hesitation is good judgment

Not all hesitation is a problem.

AI should be used as needed. It should solve a real problem, save real time, or make a task clearer or easier. If none of those apply, then choosing not to use it is a sound decision.

Using a tool simply because it exists, or because everyone else is using it, is not progress. That is blind following. And blind following—especially with technology—is a dangerous route to take.

But determining whether one needs AI is both a personal decision and, at times, a business decision.

On a personal level, hesitation is understandable. No one should be forced to adopt a tool that does not add value to their daily life. But in a workplace, the situation can be different. When AI becomes part of how problems are solved or how work is done, refusing to engage with it is no longer just hesitation. It becomes a choice between practical use and resistance to change. 

Fear of making mistakes

So, where is the resistance coming from? A common reason people hesitate is the fear of making mistakes.

For teachers, workers, and small business owners, there is a quiet worry: What if I use it wrong? What if I rely on it and something goes wrong? Unlike social media, where mistakes are mostly embarrassing, mistakes with work, school, or money feel riskier.

Some worry that using AI might lead them to submit wrong information, offend someone, or break a rule they do not fully understand. When responsibilities are real, caution feels safer than curiosity.

This hesitation often stems from not knowing the limits. When people do not understand what a tool can and cannot do, the safest option is not to use it at all.

Worries about trust and privacy

Trust and privacy are two other key concerns.

People ask sensible questions: Where does my information go? Who sees what I type? Can this be used against me later? These concerns are common among parents, government workers, and anyone handling personal or sensitive information.

In communities and in online spaces where scams are common and data breaches make the news, skepticism is healthy. Many people are not rejecting AI itself; they are rejecting the careless use of it.

Being cautious with personal data is a good habit. The risk arises when people either share too much too quickly, or avoid even safe uses because no one has explained where the line should be drawn.

Some hesitation also comes from experience. Many people have seen technology touted as a solution, only to find it slow, confusing, expensive, or unreliable later. For those dealing with unstable internet, shared devices, or limited time, learning another tool feels like an unnecessary burden.

If something already works, there is no urgency to replace it. This mindset is not resistance to progress. It is practical decision-making.

Resolving hesitation 

The answer is not to convince everyone to use AI, because not everyone needs it. The answer is helping people understand when it is worth using and when it is not.

For instance, a small business owner who answers the same questions day after day may find AI helpful in preparing a simple set of FAQs with draft replies. It makes responses faster without taking control away. The time saved can then be spent where it matters most: improving the business, solving real problems, and giving proper attention to customers who need more than a standard answer.

AI works best as a helper, not as a requirement. It can explain, summarize, organize, or suggest. But the final judgment always stays with the user. It should support thinking, not replace it.

When people see clear boundaries, trust grows. When they understand that AI use is optional, need-based, and controlled, hesitation naturally softens.

AI should earn its place by being useful. And when it does, people will adopt it carefully, and on their own terms.

That, perhaps, is the safest way forward. #nordis.net

After a year-long hiatus from this blog, I am starting 2026 as a Tech Columnist for Northern Dispatch, a community online paper. I will be publishing my column pieces here.